
Effective Mass Crawling using Apache Nutch storing data on HDFS - 27/04/2018 12:54 EDT

$30-250 USD

Closed
Posted about 6 years ago

Paid on completion
Need to crawl data and store it in HDFS using Apache Nutch, integrated with Hadoop.
Project ID: 16803276
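The requested workflow maps onto the standard Nutch-on-Hadoop crawl cycle. A minimal sketch, assuming a Nutch 1.x `.job` artifact deployed against a running Hadoop cluster (the job file name, HDFS paths, and seed URL are illustrative, not from the posting):

```shell
# Sketch of a Nutch 1.x crawl submitted as Hadoop MapReduce jobs, with the
# crawldb and segments kept in HDFS. Paths, versions, and URLs are examples.

# 1. Put the seed list into HDFS
echo "https://example.com/" > seed.txt
hdfs dfs -mkdir -p /user/crawler/urls
hdfs dfs -put seed.txt /user/crawler/urls/

# 2. Inject seeds into the crawl database
hadoop jar apache-nutch-1.19.job org.apache.nutch.crawl.Injector \
    /user/crawler/crawldb /user/crawler/urls

# 3. Generate a fetch list, then fetch, parse, and update the crawldb
hadoop jar apache-nutch-1.19.job org.apache.nutch.crawl.Generator \
    /user/crawler/crawldb /user/crawler/segments
SEGMENT=$(hdfs dfs -ls /user/crawler/segments | tail -1 | awk '{print $NF}')
hadoop jar apache-nutch-1.19.job org.apache.nutch.fetcher.Fetcher "$SEGMENT"
hadoop jar apache-nutch-1.19.job org.apache.nutch.parse.ParseSegment "$SEGMENT"
hadoop jar apache-nutch-1.19.job org.apache.nutch.crawl.CrawlDb \
    /user/crawler/crawldb "$SEGMENT"
```

Steps 3 onward are repeated per crawl round; in practice the `bin/crawl` script shipped in Nutch's `runtime/deploy` directory drives this same loop. These commands require a live HDFS/YARN cluster, so they are shown as a configuration/operations fragment rather than a runnable test.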

About the project

6 proposals
Remote project
Active 6 years ago

6 freelancers bid an average of $244 USD
Hi, I have experience setting up Nutch to store data in Hadoop/HBase. Please let me know if you are interested; I am available to start right away.
$555 USD in 10 days
4.0 (17 reviews)
5.7
Hello sir, I have experience in HDFS and Hadoop. I have done several projects in this field; ping me for more info.
$155 USD in 3 days
4.4 (5 reviews)
4.5
Hi, I have expertise in crawling data using HDFS and Hadoop. Regarding your job post, I would like to inform you that we have a team of skilled professionals with strong skills in this area. Please send over your queries so we can go through them and provide you with the right solution. Looking forward to hearing from you. Thanks, Ricky
$222 USD in 3 days
0.0 (1 review)
0.0
Dear Client,

With more than 10 years' experience in Big Data and Hadoop, I have delivered the following:

- Implementation and ongoing administration of Hadoop infrastructure: cluster maintenance, creation and removal of nodes, HDFS support and maintenance, cluster monitoring and troubleshooting.
- Security design, implementation, and maintenance; working with application teams to install operating system and Hadoop updates, patches, and version upgrades.
- Deployment of a secure platform based on Kerberos authentication and Apache Ranger authorization, covering Hadoop, HBase, Kafka, Spark, Ambari, Sqoop, Hortonworks, Cloudera, VMware, Elasticsearch, and Cassandra.
- Automated deployment of Spark and Flink clusters; Spark, Spark Streaming, Flink, and Storm internals; integration of Jupyter notebooks (Python, R, Scala) for deployment in Spark, Storm, and Flink environments.
- Quota management for HDFS, HBase, and Kafka.
- Machine learning with TensorFlow: built a solution that recognizes images from search words and runs on distributed computing (Hadoop/Spark) for a photo-storage company.
- Setting up Eclipse projects with Maven dependencies for the required MapReduce libraries; coding, packaging, and deploying projects on a Hadoop cluster to run MapReduce jobs.
- Twitter sentiment analytics: collecting real-time data (JSON format) and performing sentiment analysis on continuously flowing streaming data.

Regards, Suzan
$155 USD in 3 days
0.0 (0 reviews)
0.0

About the client

Azerbaijan
1.4 (1 review)
Payment method verified
Member since Mar 1, 2018

Client verification

Freelancer ® is a registered Trademark of Freelancer Technology Pty Limited (ACN 142 189 759)
Copyright © 2024 Freelancer Technology Pty Limited (ACN 142 189 759)