Sr. Data Engineer (AWS)
The UMB Alternative Investments - Support team supports the development of the software used by our Fund Accountants, external Alternative Investment client teams, and Investors to manage their investment portfolios. This robust software gives them a competitive advantage within the Alternative Investments industry, which continues to evolve and provides exciting opportunities for individuals to learn new skills, administer new tools, and grow their careers.
As a Sr. Data Engineer, you will be responsible for building and optimizing data architecture and data pipelines in support of UMB’s Alternative Investments web applications, as well as various other applications. You will work closely with the team to design and build enterprise-level solutions, and you will look for ways to improve development efficiency and implement best practices. These responsibilities are a subset of the role, which also includes other initiatives as assigned by IT leadership.
This role is hybrid (4 days on-site / 1 day remote) at our Ogden, UT or Kansas City, MO metropolitan locations.
How you’ll spend your time:
- Design and develop cloud-native data systems and architecture for application, API, and reporting needs.
- Build resilient data pipelines (ETL/ELT) for incremental and initial data loads into OLTP and OLAP databases using tools such as AWS Glue, Airflow, and Step Functions.
- Optimize SQL queries for performance and evangelize best practices to the team.
- Troubleshoot data pipeline, integration, and deployment issues.
- Implement monitoring and observability to ensure the timeliness and correctness of data delivered to destination systems.
- Become a domain expert on the Alternative Investments business and systems.
- Provide mentorship to members of the team and help foster a learning environment.
We’re excited to talk if you have:
- Bachelor’s degree in Computer Science, Data Engineering, or a similar discipline, plus at least 5 years of industry experience.
- Proficiency in SQL and in programming languages and frameworks such as Python and Spark.
- Expertise in cloud technologies such as AWS or Azure, including experience implementing cloud-based solutions using serverless, containers, observability, and security.
- Understanding of data systems and experience optimizing pipelines for cost efficiency and performance.
- Experience with relational (PostgreSQL) and non-relational (DynamoDB) databases, and comfort writing SQL and NoSQL queries.
- Working knowledge of CI/CD or DevOps.
Bonus Points if you have:
- Data engineering certifications or completed data engineering coursework.
- Experience building pipelines that process large volumes of data.
- Experience building real-time or near-real-time data pipelines.
- Experience using Gen AI tools for code generation and SDLC processes.