Wednesday, 28 February 2018

JOB DETAILS

Job Title: Big Data Developer (Scala/Spark)

Location: Philadelphia, PA

Duration: 18-month contract

** Please see manager notes for Big Data:

Skills:

·         100% of the code is in Scala; the team writes it from scratch

·         Advanced Scala and advanced Spark needed

·         Need experience with Scala 2.11 or higher and Spark 2.1 or higher

·         Scalaz or Cats is a MUST; either is fine

·         Must be very good with Spark

·         Good Hadoop experience

·         Kubernetes and Elasticsearch are nice to have

·         Quintiles, Bloomsburg, and BOA are a few companies doing the same thing this team is; you might be able to find candidates there
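As a rough sketch of the functional Scala style this kind of work calls for (standard library only, compatible with Scala 2.11+; in a real pipeline the `Seq` would be a Spark `Dataset` and validation would typically use Cats or Scalaz typeclasses instead of plain `Either` — all names here are illustrative, not from the posting):

```scala
import scala.util.Try

// Minimal sketch of the validate-then-aggregate pattern common in
// Scala/Spark ETL code. Parse failures are captured as Left values
// rather than thrown, so a bad record never kills the batch.
object EtlSketch {
  final case class RawRecord(id: String, amount: String)
  final case class Parsed(id: String, amount: Double)

  // Parse one record, turning a failure into a descriptive Left.
  def parse(r: RawRecord): Either[String, Parsed] =
    Try(r.amount.toDouble).toOption
      .map(a => Parsed(r.id, a))
      .toRight(s"bad amount for id=${r.id}: ${r.amount}")

  // Split a batch into error messages and parsed rows, then total the rows.
  def totalValid(batch: Seq[RawRecord]): (Seq[String], Double) = {
    val parsed = batch.map(parse)
    val errs   = parsed.collect { case Left(e)  => e }
    val total  = parsed.collect { case Right(p) => p.amount }.sum
    (errs, total)
  }

  def main(args: Array[String]): Unit = {
    val batch = Seq(RawRecord("a", "10.5"), RawRecord("b", "oops"), RawRecord("c", "4.5"))
    val (errs, total) = totalValid(batch)
    println(s"total=$total errors=${errs.size}")
  }
}
```

With Cats, `parse` would return a `ValidatedNel[String, Parsed]` so that all errors in a batch accumulate instead of short-circuiting; the shape of the code stays the same.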

Key Summary:

Job Description: Responsibilities

• Designs, develops, and implements web-based Java applications to support business requirements.

• Follows approved life cycle methodologies, creates design documents, and performs program coding and testing.

• Resolves technical issues through debugging, research, and investigation. Relies on experience and judgment to plan and accomplish goals.

• Performs a variety of tasks. A degree of creativity and latitude is required. Typically reports to a supervisor or manager.

• Codes software applications to adhere to designs supporting internal business requirements or external customers.

• Standardizes the quality assurance procedure for software. Oversees testing and develops fixes.

• Contributes to the design and development of high-quality software for large-scale Java/Spring Batch/Hadoop distributed systems.

• Loads and processes disparate data sets using appropriate technologies, including but not limited to Hive, Pig, MapReduce, HBase, Spark, Storm, and Kafka.

Skills

• Requires a bachelor's degree in area of specialty and 5 - 8 years of experience in the field or in a related area.

• Good knowledge of standard concepts, practices, and procedures within a particular field.

• 5-8 years of Java experience.

• Strong communication skills.

• Experience with HBase, Kafka, and Spark.

• Expert in Hive SQL and ANSI SQL; strong hands-on data analysis using SQL.

• Ability to write simple to complex SQL, and to comprehend and support data questions/analysis using existing complex queries.

• Familiarity with DevOps tooling (Puppet, Chef, Python).

• Good understanding of Big Data concepts and common components, including YARN, queues, Hive, Beeline, AtScale, Datameer, Kafka, and HDF.

Position Comments:       Must have very good Scala programming skills, along with Spark, and an understanding of Docker, Kubernetes, and Elasticsearch

Additional Skills:               Scala, Spark, Docker, Kubernetes, Elasticsearch





Thanks and Regards
Saransh Talwar
614-664-7645
saransh@technocraftsol.com

