Data Engineer (Scala + Spark)
Location: Fully remote, Poland or Ukraine
On behalf of Adthena, Efisco is looking for a Data Engineer with deep knowledge of Scala and Spark to join the fully remote team on a full-time basis. We are looking for a good team player who can bring knowledge and experience to Adthena’s Data Engineering team and has a passion for solving complicated problems.
Adthena’s mission is a world of search transparency where precise ads connect marketers to consumers. This statement is key to our ethos here at Adthena and is backed by our Whole Market View technology, a dynamic, AI-driven, data model that is unique for each advertiser, representing their entire relevant search landscape.
Powered by its patented machine learning technology, Whole Market View provides the comprehensive data scope and quality required by the world’s leading advertisers to precisely assess competitive opportunities at scale across their entire market, without limitations. Adthena indexes information hourly, processing over 10TB of new data, 500 million adverts and 200 million keywords across 15 different languages each day.
Things that make Adthena Unique
- Machine-Learned Whole Market View
- Supervised and Unsupervised Learning
- Convolutional Neural Networks
- Natural Language Processing
- Word Vector Embeddings (Word2Vec)
- Built for Client Value and Outcomes
- World Class Customer Success
Reasons why you should join the Adthena Data Engineering team @ Efisco
- Startup Engineering culture
- Good work/life integration
- Your work is visible and tangible
- Your input is heard and implemented
- Vacations: Annual leave + Christmas week
- Paid sick leave
- Individual coaching programs
- Monthly Hackdays
- Monthly socials and company-wide retreats (once pandemic restrictions ease)
- Free Trainers when you join our team
- Social activities to join
- Extensive training resources
Adthena’s Data Engineering Team
Adthena’s data team is a market leader in developing complex ETL and machine learning solutions. With published authors and award-winning data scientists who contribute to some of the major machine learning and distributed data technologies such as Apache Spark, we are a friendly, passionate group of engineers making a career out of building great software for our customers.
As a Data Engineer, you will be working across our entire stack, so a real passion for driving the product and technology forward is something that we value. Your responsibilities will include helping shape the vision for the future architecture of this complex data system and contributing innovative ideas that use cutting-edge technology. You will work closely with the Web and Data Science teams to deliver user-centric solutions to our customers and become an expert in developing high-quality technical solutions.
- Build services/features/libraries that serve as definitive examples for new engineers, and make major contributions to library code or core services
- Design low-risk Spark processes and write effective, complex Spark jobs (data processing, aggregations, pipelines)
- Design low-risk APIs and write complex asynchronous, highly parallel, low-latency APIs and processes
- Work as part of an Agile team to maintain, improve, and monitor Adthena’s data collection processes using Java and Scala
- Write high-quality, extensible, and testable code by applying good engineering practices (TDD, SOLID) in line with Adthena’s Engineering Practices
- Understand and apply modern technologies, data structures, and design patterns to solve real problems efficiently
- Understand Adthena’s data architecture, apply appropriate design patterns, and design complex database tables
- Support the TA and Data Science teams to help deliver and productionise their backlog/prototypes
- Take ownership and pride in the products we build and always make sure they are of the highest standard
- Be empathetic towards team members and customers
Requirements
- Bachelor’s degree in Computer Science, a similar technical field of study, or equivalent practical experience
- Commercial experience developing Spark Jobs using Scala
- Commercial experience using Java and Scala (Python nice to have)
- Experience in data processing using traditional and distributed systems (Hadoop, Spark, AWS S3)
- Experience designing data models and data warehouses
- Experience with SQL and NoSQL database management systems (PostgreSQL, Cassandra)
- Commercial experience using messaging technologies (RabbitMQ, Kafka)
- Experience using orchestration software (Chef, Puppet, Ansible, Salt)
- Confident with building complex ETL workflows (Luigi, Airflow)
- Good working knowledge of cloud technologies (AWS)
- Good knowledge of monitoring software (ELK stack)
- Strong problem-solving skills, with the ability to bring ideas forward and adapt solutions to complex challenges
- Excellent oral and written English
Tech stack used in the Data Engineering Team
If you are interested, please send your detailed CV to email@example.com or fill in the form below.