Description:
We are seeking a highly technical and self-directed Senior Software Engineer to contribute to the development of data processing pipelines for a new AI-enabled data analytics product targeted at Large Ag customers.

Ideal candidates will have:
- 5+ years of professional software development experience using Python
- 2+ years of hands-on experience with AWS and Databricks in production environments

We are looking for mid-career professionals with a proven track record of deploying cloud-native solutions in fast-paced software delivery environments.

In addition to technical expertise, successful candidates will demonstrate:
- Strong communication skills, with the ability to clearly articulate technical concepts to both technical and non-technical stakeholders (this is extremely important)
- The ability to work effectively with limited supervision in a distributed team environment
- A proactive mindset, adaptability, and a commitment to team success

Key Responsibilities:
- Design and implement AWS/Databricks solutions to process large geospatial datasets for real-time API services
- Develop and maintain REST APIs and backend processes using AWS Lambda
- Build infrastructure as code using Terraform
- Set up and maintain CI/CD pipelines using GitHub Actions
- Optimize system performance and workflows to improve scalability and reduce cloud costs
- Enhance monitoring and alerting across systems using Datadog
- Support field testing and customer operations by debugging and resolving data issues
- Collaborate with product managers and end users to understand requirements, build the backlog, and prioritize work
- Work closely with data scientists to productionize prototypes and proof-of-concept models

Required Skills & Experience:
- Excellent coding skills in Python with experience deploying production-grade software
- Strong foundation in test-driven development
- Solid understanding of cloud computing, especially AWS services such as IAM, Lambda, S3, and RDS
- Professional experience building Databricks workflows and optimizing PySpark queries

Preferred Experience:
- Experience working with geospatial data and related libraries/tools
- Experience building and operating APIs using AWS Lambda
- Familiarity with data lake architectures and Delta Lake
- Experience with event-driven architectures and streaming data pipelines (e.g., Kafka, Kinesis)
- Exposure to MLOps or deploying machine learning models in production
- Prior experience on cross-functional teams spanning product, data science, and backend engineering

Notes:
- Onsite

VIVA is an equal opportunity employer. All qualified applicants have an equal opportunity for placement, and all employees have an equal opportunity to develop on the job. This means that VIVA will not discriminate against any employee or qualified applicant on the basis of race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or protected veteran status.