Data Engineer
RIT Solutions, Inc.
Texas City, TX
November 27, 2022
Full-time
Job Responsibilities
Design and architect a real-time data streaming architecture on AWS
Design and implement a streaming ingestion pipeline to load data from bedside devices (both discrete vitals and waveform data)
Perform throughput analysis to determine whether streaming or batch processing is more cost-effective for each data source
Utilize software development best practices such as version control via Git, CI/CD, and release management to build and deploy the streaming services and pipelines
Build automated data validation processes to ensure the quality and integrity of the datasets
Implement appropriate monitoring and observability solutions
Monitor cloud application performance for potential bottlenecks and resolve performance issues
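The throughput-analysis responsibility above boils down to a break-even comparison: streaming capacity is typically priced per shard-hour while batch processing is priced per volume processed. A minimal sketch of that comparison is below; all prices and capacity figures are hypothetical placeholders for illustration, not real AWS rates.

```python
import math

# Hypothetical placeholder rates -- substitute current provider pricing.
SHARD_COST_PER_HOUR = 0.015   # $/shard-hour (assumed)
SHARD_CAPACITY_MBPS = 1.0     # MB/s ingest capacity per shard (assumed)
BATCH_COST_PER_GB = 0.02      # $/GB processed in batch (assumed)

def streaming_cost_per_hour(throughput_mbps: float) -> float:
    """Hourly cost of provisioning enough shards for a sustained throughput."""
    shards = max(1, math.ceil(throughput_mbps / SHARD_CAPACITY_MBPS))
    return shards * SHARD_COST_PER_HOUR

def batch_cost_per_hour(throughput_mbps: float) -> float:
    """Hourly cost of batch-processing the same data volume."""
    gb_per_hour = throughput_mbps * 3600 / 1024
    return gb_per_hour * BATCH_COST_PER_GB

def cheaper_mode(throughput_mbps: float) -> str:
    """Pick the cheaper ingestion mode for a given sustained throughput."""
    streaming = streaming_cost_per_hour(throughput_mbps)
    batch = batch_cost_per_hour(throughput_mbps)
    return "streaming" if streaming <= batch else "batch"
```

Under these assumed rates, a trickle of discrete vitals (e.g. 0.1 MB/s) favors batch, while a sustained waveform feed (e.g. 5 MB/s) favors streaming, because the fixed per-shard cost is amortized at high volume.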
Job Requirements
Experience working with Kafka, Spark Streaming, Snowflake, Grafana, Prometheus, and AWS services (S3, Kinesis, Timestream, Redshift, CloudWatch, EKS)
Experience working with HL7 data
Experience coding in Python or Java
Strong SQL, Data Warehousing, and Data Lake fundamentals
Hands-on experience with Linux (RHEL/Debian) operating systems
Knowledge of version control systems such as Git
Experience consuming and building APIs
Experience utilizing Agile methodology for development