CI&T

[Job-25819] Data Engineer Senior, Brazil

Brazil Full Time
We are tech transformation specialists, uniting human expertise with AI to create scalable tech solutions.
With over 7,400 CI&Ters around the world, we've built partnerships with more than 1,000 clients over our 30-year history. Artificial Intelligence is our reality.

We are looking for a Senior Data Engineer with strong experience developing and maintaining data pipelines in Databricks, integrating and transforming data from a variety of sources, such as APIs, relational databases, and files.
As a senior member of the team, you will also provide technical guidance, mentor peers, and collaborate with cross-functional teams including data scientists, analysts, and platform engineers.

Position Overview:

As a Senior Data Engineer, you will lead the design and development of robust data pipelines, integrating and transforming data from diverse sources such as APIs, relational databases, and files. Collaborating closely with business and analytics teams, you will ensure high-quality deliverables that meet the strategic needs of our organization. Your expertise will be pivotal in maintaining the quality, reliability, security, and governance of ingested data, thereby driving our mission of Collaboration, Innovation, & Transformation.

Key Responsibilities:

Develop and maintain data pipelines.
Integrate data from various sources (APIs, relational databases, files, etc.).
Collaborate with business and analytics teams to understand data requirements.
Ensure the quality, reliability, security, and governance of ingested data.
Follow modern DataOps practices such as code versioning, data tests, and CI/CD.
Document processes and best practices in data engineering.

Required Skills and Qualifications:

Must-have Skills:
Proven experience in building and managing large-scale data pipelines in Databricks (PySpark, Delta Lake, SQL).
Strong programming skills in Python and SQL for data processing and transformation.
Deep understanding of ETL/ELT frameworks, data warehousing, and distributed data processing.
Hands-on experience with modern DataOps practices: version control (Git), CI/CD pipelines, automated testing, and infrastructure-as-code.
Familiarity with cloud platforms (AWS, Azure, or GCP) and related data services.
Strong problem-solving skills with the ability to troubleshoot performance, scalability, and reliability issues.
Proficiency with Git-based workflows (branching, code review, pull requests).
Advanced English is essential.

Nice-to-have Skills:
Experience with data contracts, schema evolution, and ensuring compatibility across services.
Expertise in data quality frameworks (e.g., Great Expectations, Soda, dbt tests, or custom-built solutions).
Familiarity with dbt, Atlan, and Soda.
Experience integrating with Power BI.
Experience with data streaming.
#LI-GV1
#Senior