Data Engineer - Infrastructure



Other Engineering, Data Science
Remote · Mexico
Posted on Thursday, February 1, 2024


Oportun (Nasdaq: OPRT) is a digital banking platform that puts its 1.9 million members' financial goals within reach. With intelligent borrowing, savings, budgeting, and spending capabilities, Oportun empowers members with the confidence to build a better financial future. Since inception, Oportun has provided more than $15.5 billion in responsible and affordable credit, saved its members more than $2.3 billion in interest and fees, and helped our members save an average of more than $1,800 annually. For more information, visit


Working at Oportun means enjoying a differentiated experience of being part of a team that fosters a diverse, equitable and inclusive culture where we all feel a sense of belonging and are encouraged to share our perspectives. This inclusive culture is directly connected to our organization's performance and ability to fulfill our mission of delivering affordable credit to those left out of the financial mainstream. We celebrate and nurture our inclusive culture through our employee resource groups.

As a Data Engineer at Oportun, you will be a key member of our EDT team, responsible for designing, developing, and maintaining sophisticated software and data platforms in support of the engineering group's charter. Your mastery of a technical domain enables you to take on business problems and solve them with technical solutions. With your depth of expertise and leadership abilities, you will actively contribute to architectural decisions, mentor junior engineers, and collaborate closely with cross-functional teams to deliver high-quality, scalable software solutions that advance our impact in the market. In this role you will have the opportunity to lead the technology effort for large initiatives (cross-functional, multi-month projects), from technical requirements gathering through final delivery of the product.
Key Responsibilities:

Data Pipeline Development and Maintenance:

  • Design, implement, and maintain robust, scalable data pipelines for the ingestion, processing, and transformation of large volumes of structured and unstructured data.
  • Optimize data workflows to ensure efficiency, reliability, and data quality throughout the pipeline.

Database Management:

  • Develop and maintain databases, data warehouses, and data lakes, ensuring proper organization, indexing, and optimization for fast data retrieval and analytics.
  • Implement and manage ETL (Extract, Transform, Load) processes to integrate data from various sources into the appropriate storage solutions.

Data Modeling and Architecture:

  • Design and implement efficient data models and architecture to support both operational and analytical use cases, balancing performance and scalability requirements.
  • Collaborate with data scientists and analysts to define data requirements and ensure the data infrastructure meets analytical needs.

Data Quality and Governance:

  • Implement data quality checks, validation processes, and monitoring systems to ensure data accuracy, completeness, and consistency.
  • Adhere to data governance policies and procedures, maintaining data privacy, security, and compliance with relevant regulations.

Performance Optimization:

  • Identify and resolve performance bottlenecks in data processing and retrieval, optimizing the system for improved speed, efficiency, and scalability.

Collaboration and Documentation:

  • Work closely with cross-functional teams, including data scientists, analysts, and software engineers, to understand data requirements and support their initiatives.
  • Create comprehensive documentation for data engineering processes, data models, and system architecture to ensure knowledge sharing and efficient onboarding of new team members.

Common Software Engineering Requirements

  • You collaborate with cross-functional teams, including product managers, designers, and other engineers, to understand business requirements and translate them into efficient and scalable software solutions.
  • You design, develop, test, deploy, support, and maintain high-quality software applications using industry best practices and modern technologies. You own issues end-to-end, including initial troubleshooting, identification of root cause, and issue resolution or escalation.
  • You write clean and maintainable code that adheres to industry coding standards and contributes to the overall stability of our systems. You participate in code reviews and provide constructive feedback to team members to ensure code quality and promote knowledge sharing.
  • You proactively find and address technical debt, inefficient practices and tools, performance bottlenecks, and bugs, continuously improving the reliability and performance of our software by building observability and other features that help troubleshoot and triage issues.
  • You demonstrate proficient use of tools, techniques, and architecture/coding patterns, and you understand the trade-offs of various architectural and design choices. Your solutions are focused on solving the needs of your customer.
  • You stay up-to-date with emerging technologies and industry trends, and proactively propose and implement innovative solutions to enhance our products and services through continuous evolution and refinement of current tools and applications.
  • Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
  • Proven experience as a Data Engineer or in a similar role.
  • Strong proficiency in programming languages like Python, Java, or Scala.
  • Expertise in working with big data technologies such as Hadoop, Spark, and Kafka.
  • Proficiency in SQL and experience with database technologies (e.g., PostgreSQL, MySQL, NoSQL databases).
  • Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and their data services (e.g., AWS Redshift, S3, Azure SQL Data Warehouse).
  • Knowledge of data warehousing concepts, data modeling, and ETL processes.
  • Excellent problem-solving skills and attention to detail.
  • Strong communication and collaboration abilities.
This job description provides an overview of the key responsibilities and qualifications expected from a data engineer. However, specific requirements and expectations may vary based on the organization, industry, and project requirements.

We are proud to be an Equal Opportunity Employer and consider all qualified applicants for employment opportunities without regard to race, age, color, religion, gender, national origin, disability, sexual orientation, veteran status or any other category protected by the laws or regulations in the locations where we operate.

California applicants can find a copy of Oportun's CCPA Notice here:

We will never request personal identifiable information (bank, credit card, etc.) before you are hired. We do not charge you for pre-employment fees such as background checks, training, or equipment. If you think you have been a victim of fraud by someone posing as us, please report your experience to the FBI’s Internet Crime Complaint Center (IC3).