Java code using Apache big data technologies: Kafka + Storm + Hadoop + MongoDB + AWS
I am a Java and Linux kernel developer. My setup includes CentOS, Eclipse, Maven, and JDK 7.
Now I am trying to add big data technologies to this setup.
I am trying Elance for the first time (instead of getting a working setup from GitHub, a textbook, or documentation).
I want to hire a passionate person to help me integrate big data technologies (I listed some above; the goal is a multi-broker Kafka setup with Storm bolts and spouts, and I may add more).
1) Maven setup for a "hello world" application using Kafka on my local PC
2) Extend 1 with Storm and Hadoop
3) Extend 2 with multiple brokers/partitions and multiple bolts/spouts
4) Add MongoDB to 3
5) Add AWS to 4
6) Extend the above framework on AWS with financial data (that is, replace "hello world" with heavy traffic)
Steps 1 and 2 are must-haves with tight deadlines (I prefer to start immediately, since this is mostly setup/configuration). Steps 3, 4, 5, and 6 are optional and have room for a change of deadline.
I have some basic code set up for step 1 (based on the Kafka website). We will start with step 1. Billing is per step.
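For reference, the Maven side of step 1 might look roughly like the following pom.xml fragment. The group and artifact coordinates are the real Apache Kafka client coordinates, but the version shown is an assumption; the freelancer should pin whatever JDK 7-compatible release we agree on:

```xml
<!-- Hypothetical pom.xml fragment for step 1; the version is an assumption -->
<dependencies>
  <dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.8.2.2</version> <!-- pick a JDK 7-compatible Kafka release -->
  </dependency>
</dependencies>
```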
Code documentation is not required, but detailed instructions for completing steps 1 through 5 are necessary.
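As a sketch of what step 3 (multiple brokers) involves: each additional broker runs from its own copy of server.properties with a unique broker ID, port, and log directory. The file name, port number, and paths below are illustrative assumptions, not my actual configuration:

```
# config/server-1.properties -- second broker (values are illustrative)
broker.id=1
port=9093
log.dirs=/tmp/kafka-logs-1
```

A topic spanning the brokers can then be created with the stock kafka-topics.sh tool, using its --partitions and --replication-factor options.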