Title: Hadoop Admin
Company: Global Atlantic
Global Atlantic Financial Group, through its subsidiaries, offers a broad range of retirement, life, and reinsurance products designed to help customers address financial challenges with confidence. A variety of options help Americans customize a strategy to fulfill their protection, accumulation, income, wealth transfer, and end-of-life needs.
Location: Brighton, MA
Team: Enterprise Information Management/Database Operations & Support
Team Size: a manager plus 2-3 admins, alongside a team of architects and developers
Why Open: Set up new environments
Start: 2/26/2018
Duration: 6 months, may extend
Interview Process: phone and video interview
Important Skills:
Technical – Hadoop Admin, Hortonworks, AWS
Soft – communication skills, able to self-start and work with minimal supervision
Must Haves:
Someone to administer the AWS environment, write CloudFormation scripts, and perform configuration management (a minimal sketch follows this list).
Help set up Hadoop clusters, with a solid understanding of cluster architecture.
Act as the point of contact with Amazon/AWS for budgeting, adding or removing storage and features, etc.
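For the CloudFormation piece of the AWS administration work above, a minimal sketch of driving a stack build from Python with boto3 is shown below. This is illustrative only: the stack name, template file, parameter keys, and instance sizing are assumptions, not values from this posting, and credentials are assumed to already be configured for the target account.

# Hypothetical sketch: create a CloudFormation stack for a Hadoop environment with boto3.
# Stack name, template path, and parameters are placeholders.
import boto3

def create_hadoop_stack(stack_name="hadoop-dev-cluster",
                        template_path="hadoop_cluster.yaml"):
    """Create a CloudFormation stack and wait until the build finishes."""
    cfn = boto3.client("cloudformation")
    with open(template_path) as f:
        template_body = f.read()
    response = cfn.create_stack(
        StackName=stack_name,
        TemplateBody=template_body,
        Parameters=[
            {"ParameterKey": "InstanceType", "ParameterValue": "m5.2xlarge"},
            {"ParameterKey": "NodeCount", "ParameterValue": "4"},
        ],
        Capabilities=["CAPABILITY_NAMED_IAM"],  # only needed if the template creates IAM roles
    )
    # Block until creation completes so failures surface immediately.
    cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)
    return response["StackId"]

if __name__ == "__main__":
    print(create_hadoop_stack())

In practice the same template and parameters would be kept under version control and promoted across environments, which is where the configuration management mentioned above comes in.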
Day to Day:
Manage large-scale Hadoop cluster environments, handling all Hadoop environment builds, including design, capacity planning, cluster setup, security, performance tuning, and ongoing monitoring.
Build out, manage, and support Big Data clusters based on Hadoop and other technologies, on-premises and in the cloud; responsible for cluster availability.
Implement and support the enterprise Hadoop environment, including design, capacity planning, cluster setup, monitoring, structure planning, scaling, and administration of the Hadoop stack (YARN, MapReduce, HDFS, HBase, ZooKeeper, Storm, Kafka, Spark, Pig, and Hive).
Evaluate and recommend systems software and hardware for the enterprise system including capacity modeling.
Contribute to the evolving architecture of our storage service to meet changing requirements for scaling, reliability, performance, manageability, and price.
Automate deployment and operation of the big data infrastructure. Manage, deploy and configure infrastructure with automation toolsets.
Work with data delivery teams to set up new Hadoop users, including creating Linux accounts, setting up Kerberos principals, and testing HDFS, Hive, Pig, and MapReduce access for the new users (see the onboarding sketch after this list).
Responsible for implementation and ongoing administration of Hadoop infrastructure.
Identify hardware and software technical problems, storage and/or related system malfunctions.
Team diligently with the infrastructure, network, database, application, and business intelligence teams to ensure high data quality and availability.
Expand and maintain our Hadoop environments (MapR distribution, HBase, Hive, YARN, ZooKeeper, Oozie, Spyglass, etc.) and Apache stack environments (Java, Spark/Scala, Kafka, Elasticsearch, Drill, Kylin, etc.).
Collaborate with application teams to install operating system and Hadoop updates, patches, and version upgrades when required.
Work closely with Technology Leadership, Product Managers, and the Reporting Team to understand functional and system requirements.
Maintain clusters, including adding and removing nodes, using tools such as Ganglia, Nagios, and Amazon Web Services.
Excellent troubleshooting and problem-solving abilities.
Facilitate proofs of concept (POCs) for new related technology versions, toolsets, and solutions, both built and bought, to prove viability for given business cases.
Maintain user accounts, access requests, node configurations/buildout/teardown, cluster maintenance, log files, file systems, patches, upgrades, alerts, monitoring, HA, etc.
Manage and review Hadoop log files.
File system management and monitoring (a capacity-check sketch follows this list).
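Related to the new-user setup duty above (Linux accounts, Kerberos principals, HDFS access testing), the following is a hedged sketch rather than a prescribed procedure: the realm, keytab directory, and sample username are placeholders, and exact commands vary by distribution and security configuration.

# Hypothetical sketch: onboard a new Hadoop user on a kerberized cluster.
# Assumes it runs as root on a node with kadmin.local and the hdfs client,
# and that the hdfs superuser already has a valid Kerberos ticket.
import subprocess

REALM = "EXAMPLE.COM"                  # placeholder Kerberos realm
KEYTAB_DIR = "/etc/security/keytabs"   # placeholder keytab location

def run(cmd):
    """Run a shell command and raise if it fails."""
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

def onboard_user(username):
    # 1. Local Linux account on the edge/gateway node.
    run(f"useradd -m {username}")
    # 2. Kerberos principal and keytab for the user.
    run(f'kadmin.local -q "addprinc -randkey {username}@{REALM}"')
    run(f'kadmin.local -q "ktadd -k {KEYTAB_DIR}/{username}.keytab {username}@{REALM}"')
    run(f"chown {username}: {KEYTAB_DIR}/{username}.keytab && chmod 400 {KEYTAB_DIR}/{username}.keytab")
    # 3. HDFS home directory, created as the hdfs superuser and handed to the new user.
    run(f"sudo -u hdfs hdfs dfs -mkdir -p /user/{username}")
    run(f"sudo -u hdfs hdfs dfs -chown {username}:{username} /user/{username}")
    # 4. Smoke test: authenticate as the user and list the home directory.
    run(f"sudo -u {username} kinit -kt {KEYTAB_DIR}/{username}.keytab {username}@{REALM}")
    run(f"sudo -u {username} hdfs dfs -ls /user/{username}")

if __name__ == "__main__":
    onboard_user("analyst1")

A fuller version would also run a small Hive query and a sample MapReduce or Pig job as the new user to cover the remaining access checks mentioned above.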
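For the file system management and monitoring duty, here is a minimal capacity-check sketch against the NameNode JMX endpoint. The host, port (50070 here; 9870 on Hadoop 3), and alert threshold are assumptions, and the check would normally be wired into whatever monitoring tool the team uses (Nagios, Ganglia, etc.).

# Hypothetical sketch: alert when HDFS capacity usage crosses a threshold.
# Assumes an HTTP-reachable NameNode JMX endpoint; host and port are placeholders.
import requests

NAMENODE_JMX = "http://namenode.example.com:50070/jmx"  # placeholder host/port

def hdfs_capacity_alert(threshold_pct=80.0):
    """Print an alert line when HDFS usage exceeds the threshold; meant for a cron/monitoring hook."""
    resp = requests.get(
        NAMENODE_JMX,
        params={"qry": "Hadoop:service=NameNode,name=FSNamesystemState"},
        timeout=10,
    )
    resp.raise_for_status()
    bean = resp.json()["beans"][0]
    used_pct = 100.0 * bean["CapacityUsed"] / bean["CapacityTotal"]
    if used_pct >= threshold_pct:
        print(f"ALERT: HDFS usage {used_pct:.1f}% exceeds {threshold_pct}%")
    else:
        print(f"OK: HDFS usage {used_pct:.1f}%")

if __name__ == "__main__":
    hdfs_capacity_alert()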
Qualifications:
10 years of experience, including 4 years as a DBA (any database) or Unix administrator.
2 years of application database programming experience.
5 years of professional experience working with Hadoop technology stack.
5 years of proven experience with AWS and the Hortonworks distribution.
Prior experience with performance tuning, capacity planning, and workload mapping.
Expert experience with at least one of the following: Python or Unix shell scripting.
A deep understanding of Hadoop design principles, cluster connectivity, security, and the factors that affect distributed system performance.
Regards,
Ashish Tyagi
United Software Group Inc.
565 Metro Place South, Suite #110
Dublin, OH 43017
Phone: 614-495-9222, Ext 474
Fax: 1-866-764-1148
ashish.t@usgrpinc.com
www.usgrpinc.com