Build Big Data Execution Environment (Spark and Zeppelin).

  • Status: Closed
  • Budget: $250 - $750 USD
  • Total bids: 17

Project Description

Title: Build Big Data Execution Environment (Spark and Zeppelin).


I need a physical setup of a big data platform for my data analysis work:

- Spark (SparkSQL, MLlib, SparkR) and Zeppelin.

I'm looking for expert(s) to configure Spark and Zeppelin on my Linux server.

I'm not a Java programmer and don't have experience in the big data area.

So you need to provide the following:

- Install and configure the big data platform.

- Spark in standalone mode.

- Hadoop / Spark (SparkSQL, MLlib, SparkR) and Zeppelin.

- Use the latest versions of the related software.

- You also need to install any related programs or packages on my server.

- I need to use the Zeppelin web UI for exploring data on Spark.

- The goal of this project is to perform data analysis on Spark via Zeppelin.
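For reference, the standalone setup requested above could look roughly like the following sketch. Version numbers, paths, and hostnames are placeholders to be chosen at install time ("latest" per the requirements), not exact instructions:

```shell
# Sketch only: unpack Spark and Zeppelin release tarballs and start the
# standalone daemons. <version> is a placeholder for the chosen release.
tar xzf spark-<version>-bin-hadoop3.tgz -C /opt
export SPARK_HOME=/opt/spark-<version>-bin-hadoop3

# Start the standalone master, then attach a worker to it.
$SPARK_HOME/sbin/start-master.sh
$SPARK_HOME/sbin/start-worker.sh spark://$(hostname):7077

# Unpack and start Zeppelin; its web UI listens on port 8080 by default
# (configurable, e.g. to avoid clashing with the Spark master web UI).
tar xzf zeppelin-<version>-bin-all.tgz -C /opt
/opt/zeppelin-<version>-bin-all/bin/zeppelin-daemon.sh start
```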

You need to give me the following:

- For the sample programs, use Scala.

- All sample programs need to run in the Zeppelin UI or the Scala CLI.

- Pre-load a few sample data sets (for example, [url removed, login to view]).

- Sample programs for loading data into Spark, basically loading from a text file.
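A minimal sketch of such a loading program, assuming it runs in spark-shell or a Zeppelin `%spark` paragraph (where `spark` and `sc` are predefined), and assuming an illustrative file path and comma-separated layout:

```scala
import spark.implicits._

// Hypothetical input file: one "userId,movieId,rating" record per line.
val lines = sc.textFile("/data/sample/ratings.txt")

// Parse each line into a typed record and convert to a DataFrame.
case class Rating(userId: Int, movieId: Int, rating: Double)
val ratings = lines
  .map(_.split(","))
  .map(f => Rating(f(0).toInt, f(1).toInt, f(2).toDouble))
  .toDF()

ratings.show(5)
```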

- Sample programs for Spark queries and ML.

- A very basic Scala program for the "collaborative filtering" method.
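Collaborative filtering in Spark MLlib is typically done with ALS (alternating least squares). A minimal sketch, using a small inlined ratings table for illustration (in practice the pre-loaded sample data would be used) and assumed hyperparameter values:

```scala
import org.apache.spark.ml.recommendation.ALS
import spark.implicits._

// Tiny illustrative ratings table: (userId, movieId, rating).
val ratings = Seq(
  (0, 0, 4.0), (0, 1, 2.0), (1, 1, 3.0), (1, 2, 4.0), (2, 0, 5.0)
).toDF("userId", "movieId", "rating")

// ALS collaborative filtering; rank/iterations/regularization are
// placeholder values to be tuned on real data.
val als = new ALS()
  .setUserCol("userId").setItemCol("movieId").setRatingCol("rating")
  .setRank(5).setMaxIter(10).setRegParam(0.1)

val model = als.fit(ratings)

// Top-3 movie recommendations per user.
model.recommendForAllUsers(3).show(false)
```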

- A few SparkSQL queries.
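SparkSQL queries of the kind requested can be run against a DataFrame registered as a temporary view. A sketch, with illustrative table and column names:

```scala
import spark.implicits._

// Illustrative data; in practice this would be the pre-loaded sample data.
val ratings = Seq(
  (0, 0, 4.0), (0, 1, 2.0), (1, 1, 3.0), (1, 2, 4.0), (2, 0, 5.0)
).toDF("userId", "movieId", "rating")

// Expose the DataFrame to SQL under a view name, then query it.
ratings.createOrReplaceTempView("ratings")

spark.sql("""
  SELECT movieId, COUNT(*) AS num_ratings, AVG(rating) AS avg_rating
  FROM ratings
  GROUP BY movieId
  ORDER BY avg_rating DESC
""").show()
```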

- A sample R program utilizing SparkR.

- Simple documentation.

I will send you the Linux server information once we have a contract.

This is an initial project; if it is completed well, I'd like to extend it into a real business application.
