Job Description
The Amazon Business team is building a development team to disrupt the way businesses and institutions purchase their supplies.
Our team is focused on building solutions that enable business customers to research, discover and buy products that meet their specific business needs, whether they are common office supplies or specialized scientific products. Our customers include individual professionals, businesses and large institutions that buy at high frequency or in bulk quantities. Our customers have different needs than the traditional Amazon customer base, and we are here to solve this.
As a Data Engineer you will work closely with our tech teams to develop and support data pipelines that feed features used by large sets of customers. You will be responsible for designing and implementing solutions using a variety of tools and technologies. You will interact with tech and product teams to gather requirements and build out complex data pipelines and solutions. You will provide guidance and support to other engineers on industry best practices and direction.
Go ahead and join us. Amazon is growing extremely fast, and our team is growing even faster!
· Interfacing with tech and product team members to gather requirements and develop new datasets in Redshift and EMR environments.
· Designing, implementing and supporting production pipelines that are surfaced to customers globally.
· Leveraging diverse data platforms and tools to create production-level pipelines.
· Extracting and combining data from various heterogeneous data sources.
· Tuning application and query performance using Unix profiling tools and SQL.
· Bachelor's degree or higher in a technical/quantitative field (e.g. Computer Science, Engineering)
· 5+ years of relevant experience in one of the following areas: data engineering, database engineering or business intelligence
· 3+ years of hands-on experience writing complex, highly optimized SQL queries across large data sets
· 2+ years of experience with scripting languages such as Python
· Demonstrated strength in data modeling, ETL development, and data warehousing.
· Experience with Redshift
· Experience with AWS services including S3, Redshift, EMR and RDS
· Experience with Big Data Technologies (Hadoop, Hive, Hbase, Pig, Spark, etc.)
· Experience working on and delivering end-to-end projects independently
· Knowledge of distributed systems as it pertains to data storage and computing