
Chorus One Remote

About Us

Chorus One is one of the leading operators of infrastructure for Proof-of-Stake networks and decentralized protocols. Tens of thousands of retail customers and institutions are staking billions in assets through our infrastructure helping to secure protocols and earn rewards. Our mission is to increase freedom and speed of innovation through decentralized technologies.

We are a diverse team of around 60 people distributed all over the globe. We value radical transparency, striving for excellence and improvement while treating each other with kindness and generosity. If this sounds like you, we’d love to hear from you.

Role

As a Platform Engineer, you must be able to work in an ambiguous environment with an optimistic attitude, happily wearing many hats, while upholding our company’s principles and values and adding to our unique culture. As a member of our team, you will work collaboratively with team members and partners across projects. You will be responsible for maintaining, scaling, and monitoring existing infrastructure, including cloud machines, bare-metal servers, and a Kubernetes cluster, so that Chorus One can provide secure and reliable industry-leading Proof-of-Stake validation services. You will lead with intellectual curiosity and a high standard of excellence, ensuring best practices across development, testing, and security.

The Platform Engineers within our Infrastructure team play a pivotal role in laying the foundation for Chorus One’s ongoing growth, so we are continuously expanding the team and seeking skilled Platform Engineers to join us.

Job Type: Full-time

Responsibilities

  • Maintain, scale and monitor existing infrastructure, including bare metal servers, cloud machines, and a Kubernetes cluster, to allow Chorus One to provide secure and reliable industry-leading Proof-of-Stake validation services.
  • Institute monitoring and alerting systems for infrastructure. Enable other team members to attend to and troubleshoot problems as they arise.
  • Take all steps required to ensure maximum availability and uptime of maintained blockchain networks. This includes routine and emergent application updates, monitoring of systems, timely response to alerts and on-call phone calls, as well as rapid response to mitigate site outages.
  • Develop software related to blockchain data extraction or interchain communication, on an as-needed basis.
  • Reason about and improve the security properties of infrastructure. Secure key management, server hardening, and intrusion detection are important themes.
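
The monitoring-and-alerting responsibility above can be sketched with a minimal Prometheus alerting rule. This is purely illustrative: the job name, threshold, and labels are assumptions, not Chorus One’s actual configuration.

```yaml
# Illustrative alerting rule: page on-call when a validator stops answering scrapes.
groups:
  - name: validator-health
    rules:
      - alert: ValidatorDown
        expr: up{job="validator"} == 0
        for: 2m
        labels:
          severity: page
        annotations:
          summary: "Validator {{ $labels.instance }} is unreachable"
```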

What we are looking for

  • Strong Linux skills, including a deep understanding of the kernel and userspace.
  • Strong networking skills, including a deep understanding of routing, forwarding, and load-balancing.
  • Prior experience with a range of orchestration and configuration management tools. We use Kubernetes, Ansible, Terraform, and Salt.
  • The ability to develop tooling and monitoring solutions where none exist, and to debug unreliable software – many of the projects we run are of alpha quality.
  • Good knowledge of security as it relates to bare-metal and cloud-based infrastructure.
  • We are looking not just for someone who can configure an existing cloud solution, but for someone who can build the infrastructure underlying a cloud.
  • We expect candidates to know one programming language in addition to Bash, and we test for this with a coding interview. We don’t expect you to be a seasoned software engineer, but we do expect you to be comfortable writing small programs. Popularly used languages at Chorus One are Python, Golang and Rust.
  • The ability to work independently, with a high level of ambiguity and undefined requirements.
  • A Bachelor’s or advanced degree in Computer Science or a related subject is a plus, but not strictly required.
  • You are able to work in the following time zone: Switzerland ± 6 hours.
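
As a purely illustrative sketch of the “small programs” the coding interview targets, here is the kind of short Python function a candidate might be asked to write. The scenario and names are hypothetical, not an actual interview question.

```python
# Hypothetical task: given the block heights a validator signed, find the
# heights it missed up to the current chain height.
def missed_blocks(signed_heights, chain_height):
    """Return the block heights in [1, chain_height] that were not signed."""
    signed = set(signed_heights)  # O(1) membership checks
    return [h for h in range(1, chain_height + 1) if h not in signed]

print(missed_blocks([1, 2, 4, 5], 5))  # [3]
```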

Our Offer

  • Autonomy and ownership in a friendly and supportive work environment and the opportunity for rapid growth.
  • Competitive fixed compensation (USD 100k - 140k) + equity.
  • All-expenses-paid biannual team retreats at various destinations (Coronavirus permitting). Past retreats took place in Egypt, Serbia, Kenya, the USA, South Korea, and Dubai.
  • Remote, but not alone. We are a strong global collaborative environment.
  • Remote working budget (laptop, co-working space, etc.)
  • Personal development budget
  • Gather experience and build your network in the vibrant crypto ecosystem.
  • Learn about state-of-the-art protocols that lay the foundation for an open, transparent, and programmable financial system.

Biconomy Remote

Biconomy empowers Web3 developers to build seamless, user-friendly dApps that work effortlessly across multiple blockchains. Our battle-tested modular account and execution stack eliminates traditional UX friction points, helping projects accelerate user adoption while reducing development costs. By processing over 50 million transactions across the 300+ dApps we’ve served, we’re powering the future of onchain economies.

The Role: Innovating at the Intersection of AI and DeFi

We are assembling a world-class team to redefine on-chain analytics using AI Agents & Machine Learning. As a DevOps Engineer, you will build and optimize the AI & ML infrastructure, enabling real-time wallet activity analysis, ML-driven tagging, and PnL insights at scale. You’ll work with Kafka, Snowflake, AWS S3, and high-speed data pipelines to process over 10-15 billion rows of historical data and real-time streaming events, ensuring a scalable, secure, and efficient AI ecosystem.

This is a high-impact role at the core of our AI-driven crypto intelligence platform.

What Will You Be Doing?

Scalable AI & ML Infrastructure

  • Design & optimize cloud-native architectures for AI & ML-driven analytics on DeFi transactions.
  • Develop and maintain high-performance, distributed computing environments that process billions of on-chain and off-chain events.
  • Deploy and manage ML models & AI agents efficiently in Kubernetes (K8s) and other containerized environments.

High-Speed Data Engineering

  • Design real-time streaming pipelines using Kafka for high-frequency on-chain transaction ingestion.
  • Optimize Snowflake queries & storage solutions to handle large-scale blockchain datasets efficiently.
  • Implement ETL/ELT pipelines for structured & unstructured blockchain data aggregation.
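
To illustrate the ETL/ELT bullet above, here is a minimal Python sketch of a transform step that normalizes a raw on-chain transfer event into a structured row ready for warehouse loading. The event shape and field names are assumptions, not Biconomy’s actual schema.

```python
from datetime import datetime, timezone

# Hypothetical raw event shape -- keys are assumptions for illustration only.
def normalize_transfer(raw):
    """Flatten a raw on-chain transfer event into a structured warehouse row."""
    return {
        "tx_hash": raw["hash"].lower(),        # canonicalize hex casing
        "block_time": datetime.fromtimestamp(
            raw["timestamp"], tz=timezone.utc
        ).isoformat(),
        "from_addr": raw["from"].lower(),
        "to_addr": raw["to"].lower(),
        "value_wei": int(raw["value"]),        # full integer precision, no floats
    }
```

Keeping token amounts as integers (rather than floats) matters at this scale: rounding errors across billions of rows would corrupt PnL aggregates.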

Infrastructure Automation & Reliability

  • Automate cloud infrastructure using Terraform, Pulumi, or CloudFormation for seamless scaling.
  • Enhance security & performance with CI/CD best practices for AI model deployment.
  • Implement observability, logging & monitoring with tools like Prometheus, Grafana, and Datadog.
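
The infrastructure-automation bullet can be sketched with a small Terraform fragment. Resource names, retention periods, and the choice of S3 are assumptions for illustration, not an actual production setup.

```hcl
# Illustrative only: an S3 bucket for raw chain events, with old data tiered
# to cheaper storage. Names and thresholds are assumptions.
resource "aws_s3_bucket" "chain_events_raw" {
  bucket = "example-chain-events-raw"
}

resource "aws_s3_bucket_lifecycle_configuration" "chain_events_raw" {
  bucket = aws_s3_bucket.chain_events_raw.id

  rule {
    id     = "tier-old-events"
    status = "Enabled"
    transition {
      days          = 30
      storage_class = "GLACIER"
    }
  }
}
```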

ML & AI Model Deployment

  • Streamline model deployment for AI agents that analyze blockchain wallet behavior.
  • Build infrastructure to support model training, inference, and real-time decision-making.
  • Optimize GPU workloads for AI-driven pattern recognition & risk analysis.

Collaboration Across Teams

  • Work alongside AI engineers, blockchain developers, and data scientists to integrate AI-driven insights into DeFi tools.
  • Optimize node infrastructure & RPC services for multi-chain DeFi interactions.
  • Research and implement best practices in Web3 DevOps, cloud automation, and ML infrastructure.

Requirements:

Core Experience

  • 5+ years of DevOps experience, specializing in AI/ML infrastructure & high-scale data pipelines.
  • Expertise in AWS, GCP, or Azure, focusing on scalable, event-driven architectures.
  • Experience handling Kafka queues, Snowflake databases, and massive-scale data processing.

DevOps & Infrastructure Skills

  • CI/CD Automation using GitHub Actions, CircleCI, Jenkins, or ArgoCD.
  • Kubernetes & Docker for managing AI workloads and high-performance model inference.
  • Infrastructure-as-Code (IaC): Terraform, Pulumi, or CloudFormation.

Data Engineering & AI Model Ops

  • Experience with Kafka, Snowflake, and S3 for real-time & historical data processing.
  • ML Model Deployment: Knowledge of TensorFlow, PyTorch, or ONNX for AI-based wallet analytics.
  • Strong Python & Bash scripting for automation and orchestration.

Security & Reliability

  • Security-first mindset: Experience with IAM, firewall management, and Web3 security best practices.
  • Observability & Monitoring using Prometheus, Grafana, Datadog, or OpenTelemetry.

Bonus Skills

  • Experience with multi-chain blockchain infrastructure (Ethereum, Solana, L2s).
  • Knowledge of DeFi protocols, smart contracts, and risk analytics.
  • Experience with AI inference at scale (Ray, Triton, Hugging Face Transformers, etc.).

What We Offer:

  • Flexible Working Hours: Enjoy autonomy over your schedule.
  • Generous Vacation Policy: 25 days of vacation per year plus public holidays.
  • Competitive Salary: With regular performance reviews.
  • Token Allocation: Be rewarded with tokens as part of our compensation package.
  • Growth Opportunities: Be part of an exciting new project with significant career growth potential.
  • Innovative Work Culture: Join a team that’s at the cutting edge of Web3, AI, and DeFi, and help shape the future of the digital economy.
  • Fun and Engaging Team Activities: Game nights, virtual celebrations, and work retreats to keep things exciting.

At Biconomy, we believe in creating a diverse and inclusive workplace. We are committed to being an equal-opportunity employer, and we do not discriminate based on race, national origin, gender, gender identity, sexual orientation, disability, veteran status, age, or any other legally protected status.


Zinnia Remote

WHO WE ARE:

Zinnia is the leading technology platform for accelerating life and annuities growth. With innovative enterprise solutions and data insights, Zinnia simplifies the experience of buying, selling, and administering insurance products. All of which enables more people to protect their financial futures. Our success is driven by a commitment to three core values: be bold, team up, deliver value – and that we do. Zinnia has over $180 billion in assets under administration, serves 100+ carrier clients, 2500 distributors and partners, and over 2 million policyholders.

WHO YOU ARE:

A Java DevOps Engineer with 5+ years of experience playing a pivotal role in delivering high-quality software solutions. As a Java DevOps Engineer, you will be responsible for automating and streamlining the software development and deployment process, ensuring efficient and reliable delivery of our products.

WHAT YOU’LL DO:

DevOps Infrastructure:

Design, build, and maintain robust and scalable DevOps infrastructure, including CI/CD pipelines, source control systems, and deployment environments. Automate infrastructure provisioning and configuration management using tools like Ansible, Terraform, or Puppet. Collaborate with development teams to ensure smooth integration of DevOps practices into their workflows.

Java Development:

Contribute to the development and maintenance of Java-based applications, following best practices and coding standards. Work closely with development teams to understand their requirements and provide technical guidance. Participate in code reviews and ensure code quality.

Maven:

Utilize Maven for build automation, dependency management, and project management. Create Maven build scripts and configurations. Manage dependencies and resolve conflicts.
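
The Maven duties above might look like the following pom.xml fragment, which pins a dependency and enforces version convergence to surface conflicts at build time. Versions and coordinates are assumptions for illustration.

```xml
<!-- Illustrative fragment: dependency pinning plus conflict detection. -->
<dependencies>
  <dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>2.0.13</version>
  </dependency>
</dependencies>
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-enforcer-plugin</artifactId>
      <version>3.4.1</version>
      <executions>
        <execution>
          <goals><goal>enforce</goal></goals>
          <configuration>
            <rules>
              <!-- Fails the build when transitive dependencies disagree on versions -->
              <dependencyConvergence/>
            </rules>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```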

Docker and Containerization:

Manage containerized environments using tools like Kubernetes or Docker Swarm. Leverage Docker to containerize applications and simplify deployment. Optimize container images for performance and security.
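
A minimal multi-stage Dockerfile sketch for a Maven-built Java service illustrates the image-optimization point: build tooling stays in the first stage, so the shipped image carries only a JRE and the jar. Base images and paths are assumptions.

```dockerfile
# Stage 1: build with Maven (illustrative image tags and paths)
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /app
COPY pom.xml .
RUN mvn -q dependency:go-offline   # cache dependencies in their own layer
COPY src ./src
RUN mvn -q package -DskipTests

# Stage 2: slim runtime image, non-root for security
FROM eclipse-temurin:17-jre
COPY --from=build /app/target/*.jar /app/app.jar
USER 1000
ENTRYPOINT ["java", "-jar", "/app/app.jar"]
```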

CI/CD Pipelines:

Design and implement efficient CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or CircleCI. Automate testing, code quality checks, and deployment processes. Ensure pipeline security and compliance with best practices.
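
One way the pipeline described above might be expressed, sketched as a GitLab CI configuration. Stage names, images, and paths are assumptions, not Zinnia’s actual pipeline.

```yaml
# Illustrative .gitlab-ci.yml: compile once, then run the full verify cycle.
stages: [build, test]

build:
  stage: build
  image: maven:3.9-eclipse-temurin-17
  script:
    - mvn -q package -DskipTests
  artifacts:
    paths:
      - target/*.jar

test:
  stage: test
  image: maven:3.9-eclipse-temurin-17
  script:
    - mvn -q verify   # unit tests, integration tests, code-quality checks
```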

AWS Services:

Utilize AWS services effectively for infrastructure provisioning, deployment, and management. Implement cloud-native architectures and best practices. Optimize cloud resources for cost-efficiency and performance.

Log Aggregation:

Analyze logs to identify and troubleshoot issues. Implement log aggregation and analysis tools (e.g., ELK Stack, Splunk) to monitor application and infrastructure health.

WHAT YOU’LL NEED:

  • Bachelor’s or Master’s degree in Computer Science, IT Engineering, or a related field.
  • 8+ years of experience as a Java developer or DevOps engineer.
  • Strong programming skills in Java and at least one scripting language (e.g., Bash, Python, Groovy).
  • Experience with DevOps tools and practices, including CI/CD pipelines, version control systems, and infrastructure automation.
  • In-depth knowledge of Docker and containerization technologies.
  • Experience with container orchestration platforms like Kubernetes or Docker Swarm.
  • Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and cloud-native technologies.
  • Good problem-solving and analytical skills.
  • Strong communication and collaboration skills.

Preferred Skills:

  • Experience with infrastructure-as-code (IaC) tools like Terraform or Ansible.
  • Knowledge of monitoring and alerting tools (e.g., Prometheus, Grafana).
  • Familiarity with configuration management tools like Ansible or Puppet.

WHAT’S IN IT FOR YOU?

At Zinnia, you collaborate with smart, creative professionals who are dedicated to delivering cutting-edge technologies, deeper data insights, and enhanced services to transform how insurance is done. Visit our website at www.zinnia.com for more information. Apply by completing the online application on the careers section of our website. We are an Equal Opportunity employer committed to a diverse workforce. We do not discriminate based on race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability.

