BigData/Machine Learning Platform Developer
Internship | Santa Clara | IT Development
Job description
We are looking for a big-data platform engineer to design and develop our big-data application infrastructure. This role involves developing and supporting big-data applications and frameworks on the BlueData EPIC platform, including installation, configuration, and management of Hadoop/Spark job execution frameworks, distributed file systems, NoSQL databases, and SQL-on-Hadoop systems.
Desired Skills & Experience/Qualifications:
· BS or MS in Computer Science or equivalent
· Experience with large data sets and distributed computing
· Hands-on experience with the Hadoop stack (e.g. MapReduce, Sqoop, Pig, Hive, HBase, Flume)
· Experience working on large Linux clusters
· Hands-on experience with production Hadoop systems (e.g. administration, configuration management, monitoring, debugging, and performance tuning)
· Knowledge of NoSQL platforms (e.g. key-value stores)
· Hands-on experience with open source software platforms and languages (e.g. Java/Scala, Python)
· Familiarity with AI/ML frameworks
· Knowledge of cloud computing infrastructure (e.g. Amazon Web Services EC2, Elastic MapReduce)
· Knowledge of data warehousing and Business Intelligence systems
· Self-starter and fast learner, with the ability to work in a fast-paced environment
Job:
Engineering
Job Level:
Specialist
Hewlett Packard Enterprise is EEO F/M/Protected Veteran/Individual with Disabilities.
HPE will comply with all applicable laws related to the use of arrest and conviction records, including the San Francisco Fair Chance Ordinance and similar laws and will consider for employment qualified applicants with criminal histories.