Work At Home - Texas
Description:
Staff Database Engineer
Overview:
We are seeking an experienced Staff Database Engineer (contractor) to design, build, and optimize complex data systems. This senior-level contractor will work across multiple domains, including data architecture, pipeline development, and system operations.
The Staff Database Engineer will join existing engineering teams to support and enhance current systems. Responsibilities include maintaining ETL pipelines, migrating SQL Server Integration Services (SSIS) packages to Azure Data Factory (ADF), and working with Kafka for message publishing and consumption.
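For candidates gauging fit, the Kafka portion of the role resembles the following minimal sketch. It assumes the confluent-kafka Python client; the broker address, topic name, and consumer group below are hypothetical placeholders, not details from this posting.

    # Minimal publish/consume sketch using the confluent-kafka Python client.
    # Broker, topic, and group names are hypothetical placeholders.
    from confluent_kafka import Consumer, Producer

    producer = Producer({"bootstrap.servers": "localhost:9092"})
    producer.produce("example-events", key="record-1", value='{"status": "new"}')
    producer.flush()  # block until the broker confirms delivery

    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",
        "group.id": "example-consumer-group",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["example-events"])
    msg = consumer.poll(timeout=5.0)  # returns None if nothing arrives in 5 s
    if msg is not None and msg.error() is None:
        print(msg.key(), msg.value())
    consumer.close()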
Key Responsibilities:
Design and implement scalable and reliable data architectures that support large-scale data processing, transformation, and analysis.
Develop, maintain, and optimize ETL/ELT pipelines using modern tools and frameworks to move and transform data from diverse sources (flat files, streaming systems, REST APIs, electronic health record (EHR) systems, etc.).
Build and support high-performance, cloud-based systems for real-time and batch processing (e.g., data lakes, warehouses, and mesh architectures).
Collaborate with stakeholders across engineering, data science, and product teams to gather requirements and deliver actionable data solutions.
Interface with EHR systems and healthcare data formats to ensure integration accuracy and compliance.
Own operational excellence for data systems including logging, monitoring, alerting, and incident response.
Utilize advanced programming skills (.NET, Java, or similar) and SQL to engineer robust data services.
Contribute to architecture frameworks and documentation to guide team standards and best practices.
Act as a subject matter expert (SME), mentoring junior engineers and promoting engineering excellence across the organization.
Qualifications:
7–10+ years of professional experience in data engineering, software development, or database systems.
Proven experience with SSIS and ADF.
Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent experience.
Expertise in SQL, database systems, and modern data processing tools and frameworks.
Strong proficiency in at least one programming language (.NET, Java, Python, etc.).
Demonstrated experience with modern cloud platforms (Azure, AWS, or GCP).
Familiarity with data streaming and queuing technologies (Kafka, SNS, RabbitMQ, etc.).
Understanding of CI/CD pipelines, infrastructure as code (e.g., Terraform), and containerized deployments (e.g., Kubernetes).
Comfortable with production system support, debugging, and performance optimization.
Strong problem-solving, communication, and collaboration skills.
High-level understanding of big data design patterns and architectural principles (e.g., data lake vs. warehouse vs. mesh).
Experience with RESTful APIs and integrating external data sources into internal systems.
Required Technical Skills:
Azure Data Factory (ADF): Experience building complex pipelines
SQL Server: Hands-on experience applying data changes and publishing data
SSIS (Visual Studio 2019): Understanding and converting existing packages
Kafka: Publishing/consuming messages and setting up new streams
Bonus: Familiarity with Change Data Capture (CDC), Cursor (AI tools), Python, and C#
Soft Skills & Traits:
Strong communication and collaboration
Self-directed and proactive
Strategic thinking and process improvement
Hands-on development experience on real-time projects
Industry Experience:
Healthcare experience is preferred but not mandatory. Familiarity with FHIR is a plus.
Notes:
Remote
VIVA is an equal opportunity employer. All qualified applicants have an equal opportunity for placement, and all employees have an equal opportunity to develop on the job. This means that VIVA will not discriminate against any employee or qualified applicant on the basis of race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or protected veteran status.