*Hadoop Admin – Architect (HDP – Hortonworks)*
Duration: 3 weeks
Location: Los Angeles, CA
We are looking for an experienced Architect who can lead this new customer through installing HDP and HDF and performing initial data ingestion. The engagement must start on March 19 and runs for three weeks onsite in Los Angeles.
*Architectural Review*
• Determine software layout for each server
• Discuss sizing and data requirements
• Discuss key points that will dictate software component selection and installation
• Review and validate the installation platform: physical, cloud, or hybrid
• Review and validate appropriate cluster sizing and capacity based on anticipated data ingestion volume and rate
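To make the sizing and capacity discussion concrete, here is a minimal sketch (in Python, with purely illustrative numbers) of how raw HDFS capacity can be estimated from an assumed daily ingestion rate, retention period, replication factor, and headroom; the actual figures would come out of the review above.

```python
# Rough HDFS capacity estimate from assumed ingest volume and retention.
# All inputs below are illustrative placeholders, not customer data.

def estimate_raw_capacity_tb(daily_ingest_tb: float,
                             retention_days: int,
                             replication_factor: int = 3,
                             intermediate_overhead: float = 0.25,
                             target_utilization: float = 0.70) -> float:
    """Return the raw cluster capacity (TB) needed to hold the data."""
    logical_tb = daily_ingest_tb * retention_days               # data as written
    replicated_tb = logical_tb * replication_factor             # HDFS block replication
    with_temp_tb = replicated_tb * (1 + intermediate_overhead)  # shuffle/temp headroom
    return with_temp_tb / target_utilization                    # keep disks below ~70% full

if __name__ == "__main__":
    raw_tb = estimate_raw_capacity_tb(daily_ingest_tb=0.5, retention_days=365)
    nodes = raw_tb / 48   # assuming ~48 TB of usable disk per worker node
    print(f"~{raw_tb:.0f} TB raw capacity, roughly {nodes:.0f} worker nodes")
```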
*HDP Installation*
• Pre-installation
o Determine installation type
o Validate environment readiness
o Install Ambari agents
• HDP Installation and Deployment
o Use Ambari to deploy the agreed-upon architecture of up to 24 nodes (based on the number of nodes under an active support subscription)
o Ensure smoke tests pass on each subsystem
o Shut down and restart all services
• HDP Cluster Tuning
o Configure cluster parameters based on hardware specifications
o Run benchmarking tools to establish an initial baseline
• High Availability
o Configure NameNode HA
o Configure ResourceManager HA
o Test NameNode HA and ResourceManager HA
• HDP Configurations Backup
o Back up important *-site.xml files
o Back up Ambari server configurations
o Back up Ambari agent configurations
o Back up default databases used by the cluster
o Back up the Hive metastore
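As a minimal sketch of the configuration-backup step, the script below uses the Ambari REST API to pull the currently active site configurations (hdfs-site, yarn-site, and so on) and write them out as JSON. The Ambari host, credentials, and cluster name are placeholders, and the endpoints should be verified against the installed Ambari version.

```python
# Sketch: export current HDP site configurations via the Ambari REST API.
# Host, credentials, and cluster name below are placeholders.
import json
import requests

AMBARI = "http://ambari.example.com:8080"      # placeholder Ambari server
AUTH = ("admin", "admin")                      # placeholder credentials
CLUSTER = "hdp_cluster"                        # placeholder cluster name
HEADERS = {"X-Requested-By": "ambari"}

def backup_desired_configs(out_prefix: str = "backup") -> None:
    # Ask Ambari which configuration types/tags are currently active.
    resp = requests.get(
        f"{AMBARI}/api/v1/clusters/{CLUSTER}",
        params={"fields": "Clusters/desired_configs"},
        auth=AUTH, headers=HEADERS)
    resp.raise_for_status()
    desired = resp.json()["Clusters"]["desired_configs"]

    for cfg_type, info in desired.items():     # e.g. hdfs-site, yarn-site, hive-site
        cfg = requests.get(
            f"{AMBARI}/api/v1/clusters/{CLUSTER}/configurations",
            params={"type": cfg_type, "tag": info["tag"]},
            auth=AUTH, headers=HEADERS)
        cfg.raise_for_status()
        with open(f"{out_prefix}_{cfg_type}.json", "w") as fh:
            json.dump(cfg.json(), fh, indent=2)

if __name__ == "__main__":
    backup_desired_configs()
```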
*HDF Installation*
• Install and configure HDF software components based on the number of active HDF support subscriptions
• Configure HDF cluster tuning parameters based on best practices
*HDF Operational Production Enablement*
• Enable HDF service monitoring and alerting capabilities (a minimal monitoring sketch follows this list)
• Enable high-availability features for HDF service components
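As one lightweight example of a monitoring hook for HDF, the sketch below polls NiFi's system-diagnostics REST endpoint and flags high JVM heap utilization. The NiFi URL and the 85% threshold are assumptions; in practice alerting would normally be wired through Ambari alerts or the customer's existing monitoring stack.

```python
# Sketch: poll NiFi system diagnostics and warn on high JVM heap usage.
# The NiFi URL and the 85% threshold are illustrative assumptions.
import requests

NIFI = "http://nifi.example.com:8080"   # placeholder NiFi host (unsecured port)

def check_nifi_heap(threshold_pct: float = 85.0) -> None:
    resp = requests.get(f"{NIFI}/nifi-api/system-diagnostics", timeout=10)
    resp.raise_for_status()
    agg = resp.json()["systemDiagnostics"]["aggregateSnapshot"]
    # heapUtilization is reported as a percent string (e.g. "28.0%");
    # adjust the parsing if the installed NiFi version differs.
    heap_pct = float(agg["heapUtilization"].rstrip("%"))
    if heap_pct >= threshold_pct:
        print(f"ALERT: NiFi heap at {heap_pct:.0f}% (threshold {threshold_pct:.0f}%)")
    else:
        print(f"OK: NiFi heap at {heap_pct:.0f}%")

if __name__ == "__main__":
    check_nifi_heap()
```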
*Data Flow Use Case*
Deliver one (1) of the following Use Cases (selection of Use Case to be agreed upon between Hortonworks and the customer):
• Log File Ingestion: This use case includes the ingestion of up to five individual log files into your pre-existing Hadoop cluster.
• Splunk Optimization: This use case includes the application of pre-filtering logic to the content of two log files before they are streamed into Splunk. The filtering logic will forward only critical data into Splunk; all other non-critical data will be stored on the pre-existing Hadoop cluster (a sketch of this routing logic follows the list). All filtering rules must be pre-approved by Hortonworks prior to the start of this work.
• Streaming Analytics Feed: Accelerate big data ROI by streaming data (up to 3 files) into analytics systems such as Apache Storm or Apache Spark Streaming.
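To illustrate the kind of routing rule behind the Splunk Optimization use case, the sketch below separates critical log lines (bound for Splunk) from non-critical lines (bound for the Hadoop cluster). In HDF this logic would be built as a NiFi flow (for example with RouteText or RouteOnAttribute); the keyword list and sample lines here are purely illustrative stand-ins for the customer-approved filtering rules.

```python
# Sketch of the pre-filtering rule behind the Splunk Optimization use case.
# In HDF this would be implemented as a NiFi flow (e.g. RouteText/RouteOnAttribute);
# the markers below stand in for the customer-approved filtering rules.
from typing import Iterable, List, Tuple

CRITICAL_MARKERS = ("ERROR", "FATAL", "SECURITY")   # placeholder rule set

def split_log_lines(lines: Iterable[str]) -> Tuple[List[str], List[str]]:
    """Return (critical lines for Splunk, non-critical lines for HDFS)."""
    to_splunk, to_hdfs = [], []
    for line in lines:
        (to_splunk if any(m in line for m in CRITICAL_MARKERS) else to_hdfs).append(line)
    return to_splunk, to_hdfs

if __name__ == "__main__":
    sample = ["INFO service started", "ERROR disk failure on node07", "DEBUG heartbeat"]
    critical, bulk = split_log_lines(sample)
    print(f"{len(critical)} line(s) -> Splunk, {len(bulk)} line(s) -> HDFS")
```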
*Knowledge Transfer*
• Conduct a half-day session to conclude the engagement, covering the following:
o HDP and HDF overview
o Demonstration of sample data flow
• Complete engagement summary document detailing activities performed and delivered
• Project hand-off with Hortonworks technical support team scheduled and conducted
*Outcomes*
Achieve the following outcomes as part of the three-week engagement:
• One fully operational HDP cluster on the deployment platform of customer’s choice
• One fully operational HDF cluster on the deployment platform of customer’s choice
• One data flow use case using HDF
*Thanks & Regards,*
*Aravind*
*MSR COSMOS*
5250 Claremont Ave, Ste 249 | Stockton, CA 95207
*Desk*: 925-399-7145
*Fax*: 925-219-0934
*Email*: aravind@msrcosmos.com
*URL*: http://www.msrcosmos.com