Your guide to getting data entry done for your business
Data entry is an important task, but choosing the wrong solution can seriously harm your company's productivity.
Data Extraction is the process of extracting data from a variety of sources for further analysis. A Data Extractor is someone who helps businesses and organizations gain insight from their data and create descriptive and predictive models. They specialize in finding patterns and relationships that guide decisions and uncover meaningful information. Through carefully crafted queries and processes, our Data Extractors can transform raw data into a useful format that can be used for reporting, analytics, machine learning and more.
Here are some projects that our expert Data Extractors have made real:
When you partner with an experienced team of Freelancer's Data Extractors you can access valuable insights from your data that can guide decisions, uncover opportunities and create predictive models with new data sources. Our experts can help you unlock deeper insights with advanced filtering methods and complex coding. Explore the full range of possibilities with our talented community of professionals, capable of delivering comprehensive solutions tailored to your needs.
Ready to launch your very own project on Freelancer.com? We invite you to try us out and hire our experienced Data Extractors to make your design goals a reality. Let their creativity, skill, and proficiency bring something special to your project!
From 125,962 reviews, clients rate our Data Extractors 4.9 out of 5 stars.
Project title: RPA automation in DATAX: invoicing/accrual from Excel with a scalable architecture (future OCR + per-vendor "memory"). Context: we are an accounting firm in Colombia. We use DATAX for recording/accrual and/or invoicing, and we want to eliminate manual data entry with an RPA bot. We will start with Excel → RPA → DATAX, but the design must be ready to scale to: a folder of invoices (PDF/XML) → data extraction → per-vendor accounting rules → execution in DATAX. Objective for Phase 1 (MVP in 3–5 weeks): implement an RPA bot that, from a master Excel file, creates/records (as applicable) invoices/accruals in DATAX and generates evidence of each run. A...
I have three specific school-website links that list all current teachers and administrators. From each page I need a clean scrape of every staff member’s name, role, email address, plus the city/town and the school name, compiled into a single Excel workbook. Alongside that, I already hold an Excel sheet that contains a roster of tow and roadside drivers. The sheet has their names and the URLs of the companies they work for, but no contact details. Please crawl those company sites, locate each driver’s email address, and append the results to the same workbook, using matching columns so everything stays consistent. Key points to keep in mind: • Final deliverable: one Excel file ready for copy-and-paste outreach. • Source material: my three school websites and...
We are looking for someone with extensive, up-to-date understanding of Autodesk's proprietary DXF format, so we can extract all objects of a given type from a .DXF text file and translate them into a set of coordinates reflecting those objects' locations, following the Adobe PDF convention in which (0,0) lies in the lower-left corner. All translated DXF files would use only 2D (two-dimensional) coordinates. The resulting dataset would be a simple .CSV file.
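To give a feel for the parsing involved, here is a minimal sketch assuming plain-text (ASCII) DXF input. It handles POINT entities only; a real deliverable would cover whichever entity type the client specifies and write the results to CSV. Note that both DXF and PDF put the origin at the lower-left with Y increasing upward, so 2D coordinates need no axis flip.

```python
def extract_points(dxf_text):
    """Collect (x, y) pairs from POINT entities in an ASCII DXF stream.

    DXF files are alternating lines: a numeric group code, then a value.
    Code 0 starts a new entity; codes 10 and 20 hold a POINT's X and Y.
    """
    lines = [ln.strip() for ln in dxf_text.splitlines()]
    points, entity, x = [], None, None
    for code, value in zip(lines[0::2], lines[1::2]):
        if code == "0":                      # a new entity begins
            entity, x = value, None
        elif entity == "POINT" and code == "10":
            x = float(value)
        elif entity == "POINT" and code == "20":
            points.append((x, float(value)))
    return points

sample = "0\nSECTION\n2\nENTITIES\n0\nPOINT\n10\n12.5\n20\n7.25\n0\nENDSEC\n"
rows = extract_points(sample)   # [(12.5, 7.25)]
```

In practice a library such as ezdxf would replace the hand-rolled parsing, but the group-code structure above is what any solution ultimately reads.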
I need the entire contents of a specific website captured in a single pass. That means every piece of on-page text, all publicly visible image files, and every internal or external hyperlink. Once scraped, the information should be organised into a clean CSV file—one row per page—with columns for page URL, full body text, image file names, and link destinations. Please download the images themselves as well and bundle them in a separate folder (a simple ZIP is fine); the CSV should reference the exact filenames so everything lines up. I’m happy for you to use Python with BeautifulSoup, Scrapy, Selenium or whichever stack you prefer, as long as the final output meets these acceptance criteria: • Complete CSV containing text, image names, and link URLs for each ...
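The per-page extraction step can be sketched with the standard library alone; the column names below are assumptions, and whole-site crawling, image downloads, and JavaScript rendering are left to whichever stack (BeautifulSoup, Scrapy, Selenium) the freelancer chooses.

```python
import csv, io
from html.parser import HTMLParser

class PageExtractor(HTMLParser):
    """Collect visible text, image sources, and link targets from one page."""
    def __init__(self):
        super().__init__()
        self.text, self.images, self.links = [], [], []
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "img" and a.get("src"):
            self.images.append(a["src"])
        elif tag == "a" and a.get("href"):
            self.links.append(a["href"])
    def handle_data(self, data):
        if data.strip():
            self.text.append(data.strip())

def page_row(url, html):
    """One CSV row per page: URL, body text, image names, link targets."""
    p = PageExtractor()
    p.feed(html)
    return {"page_url": url,
            "body_text": " ".join(p.text),
            "image_files": ";".join(p.images),
            "link_urls": ";".join(p.links)}

row = page_row("https://example.com/",
               '<p>Hello</p><img src="a.png"><a href="/next">Next</a>')
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=row.keys())
writer.writeheader()
writer.writerow(row)            # repeat for every crawled page
```

Keeping the image filenames in the CSV exactly as saved into the ZIP folder is what makes the two deliverables line up.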
Fix the error: the page design changed and the scraper no longer works.
I am looking for a Python developer to create a simple and focused scraper script for Facebook Marketplace. Project Idea: The script will open a single Facebook Marketplace seller page and: • Extract all product links belonging to that seller only • Ignore any other data (no names, no prices, no images) • The final output should be a list of links only • Each product link on a separate line (link under link) Exact Requirements: • Input: Facebook Marketplace seller page URL • Output: • A file containing all product URLs for that seller • File format: TXT or CSV • Handle infinite scrolling to load all products Technical Requirements: • Python • Selenium or Playwright • Experience with dynamic websites • Clean, ...
I have a set of voter-list PDFs released by the election commission. The layout across all files is identical, so positional parsing is reliable. Right now I simply need the current batch converted, but long-term I want a reusable Python utility that pulls the following columns straight into Excel: • Name • FathersName • Age • Gender • VoterID • SerialNumber • Section Name • Polling Station Name, etc. Scope of work 1. Run the first extraction and hand me the .xlsx file so I can verify accuracy. 2. Package the underlying code (Python 3.x) with clear instructions and any dependencies so I can repeat the conversion on future lists without further help. Technical notes – Consistent layout means you can lean on libraries like pdfplumber, camelo...
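Once pdfplumber (or similar) has extracted the text, a fixed layout lets each row be split with a pattern along these lines. The row shape and VoterID format here are hypothetical and would need tuning against the real output of one page.

```python
import re

# Hypothetical row shape; adjust after inspecting the extracted text.
ROW = re.compile(
    r"^(?P<SerialNumber>\d+)\s+(?P<Name>\S+)\s+(?P<FathersName>\S+)\s+"
    r"(?P<Age>\d{1,3})\s+(?P<Gender>[MF])\s+(?P<VoterID>[A-Z]{3}\d{7})$"
)

def parse_rows(text):
    """Turn fixed-layout text lines into dicts ready for an .xlsx writer."""
    return [m.groupdict() for ln in text.splitlines()
            if (m := ROW.match(ln.strip()))]

rows = parse_rows("17 Ramesh Suresh 42 M ABC1234567")
```

Multi-word names would need looser (non-greedy) groups or column-position slicing, which is exactly where the "identical layout" guarantee pays off.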
I need every public phone number that appears on gathered into a single, well-structured Excel workbook. Please crawl the entire site, not just a few sections, and return each number alongside the key profile details that make the data usable at a glance—name, profile URL, and any other easily captured identifiers shown next to the number. A clean .xlsx with one row per profile, no duplicates, and clearly labelled columns is the only deliverable I’m expecting. If you prefer Python, Scrapy, Selenium, Beautiful Soup or a comparable stack, go ahead; I’m interested in results, not the specific toolset, as long as the script can be rerun later should the site content change. Before delivery, double-check that: • every row contains a valid phone number and url • n...
I want an .xlsx file containing all 12,728 rows of the public table shown on the INDECOPI (Peru) website. The site only displays 10 records per page, so I need you to automate the extraction and consolidate everything into a single Excel workbook. Required fields • Company name • Person's name • Registration number • All fields that appear published. Deliver them in columns with standard formatting, with no filters, pivot tables, or other added features. I will give you the exact URL and the navigation steps to reach the paginated view. Once finished, I will check that the total row count matches...
I need a senior-level specialist to harvest product data from several e-commerce sites and deliver it in a single, well-structured CSV file. The task demands production-ready techniques—think Scrapy spiders hardened with rotating proxies, Selenium or Playwright for dynamic content, and solid anti-bot countermeasures. The information I’m after is very specific: product names, prices, pictures, and SKU. Nothing less, nothing more. Your solution must run reliably at scale, cope with frequent layout changes, and leave no trace that could trigger blocks. Python is the preferred stack, but if you have a proven alternative that meets the same bar, I’m open to hearing it. To be considered, include in your proposal: • At least one example of a comparable e-commerce scrapi...
PDF to Excel Data Scraper Needed Job Title: Data Scraper Needed: Convert 24 PDF Factsheets to Clean Excel (Mutual Fund Portfolios) Project Overview: I need a freelancer to extract detailed stock portfolio data from ~24 Mutual Fund Monthly Factsheets (PDFs). I will provide the URLs/Files. Your job is to extract the full stock holdings table for specific funds and deliver a consolidated, clean Excel/CSV file. The Goal: I need the complete list of stocks (100% of the portfolio), NOT just the Top 10. The data is used for financial backtesting, so accuracy is critical. Even top 85-90% data works. Scope of Work: Input: ~24 PDF Files (Monthly Factsheets). Target Funds: For each month, extract data for the Top 10 Equity Funds (e.g., Bluechip, Midcap, Smallcap, Value Discovery, etc. - list wi...
I need an expert who can read and understand CS Khatiyan online land records written in Kaithi/old handwriting. You must extract and explain details of the plot number(s) I will provide, from the online Bihar Bhumi website. Work includes: Identify plot/khata details; Owner name + father name; Caste/community mentioned; Land area + land type/class; Any remarks or tenancy/mutation notes; Explain in simple Hindi/English; Any other land in the same owner's name. Deliverable: Clear written summary/report with all details. Skills required: CS Khatiyan reading, Kaithi script understanding, Bihar land record knowledge.
I have a single PDF that holds roughly 50 scanned business cards, all in English. I need every card transcribed into a clean Excel sheet so I can import the contacts into my CRM without manual re-typing. Here’s what has to come across from each card: • Name • Job title • Company name • Company logo (please paste the image into its own cell or include a link to the extracted file) • Direct phone number(s) • Email address • Physical address Lay everything out in a standard column format—one row per card, one clearly labeled column per data point. Accuracy is key, so I’ll spot-check against the PDF; inconsistencies or missing fields will need correcting before final hand-off. I’m fine with whatever method you prefer...
I need a web scraping expert to scrape data from Indiegogo and export it to Excel. Details I need for the projects are: Title: Project title. Category: The category of the project based on the Indiegogo categorization system. Sub-category: The sub-category of the project based on the Indiegogo categorization system. Close Date: Close date of the campaign. Open Date: Open date of the campaign. Currency: Currency used for collected funds. Funds Raised: The amount of funds raised. Funds Raised Percent: The percent of funds raised relative to the target. Funding Target: The amount of funds the campaign initiator aims to collect. Country: Country in which the project is based. Publisher: The name of the campaign initiator. Backers: The number of people who decided to fund the campaign. Updates: ...
I’m looking for a well-structured Python solution, built around BeautifulSoup (BS4) and any supportive libraries you deem essential, that reliably pulls both product details and customer reviews from Lazada on a daily schedule. The data will fuel ongoing competitor research, so consistency and clarity of the output are critical. I am looking specifically to get the data using BS4 while bypassing the captcha. Here’s how I picture the flow: • Input: category URL(s) or product list I supply in a CSV/JSON. • Scrape: title, price, promos, specs, images, ratings, full review texts, review dates, and reviewer scores. • Output: clean CSV or JSON dropped into a dated folder after each run. Make the script easy to tweak if Lazada changes its markup. Acceptance criteria 1. S...
I need a web scraping specialist to collect specific information by querying CPF numbers on a website. Required fields: - Full name - Date of birth - Address - E-mails - Phone numbers - Vehicle (make/model) - Year of manufacture - Occupation - Salary range - Likely employer. Ideal skills and experience: - Proven web scraping experience - Proficiency with tools such as Python, Beautiful Soup, Scrapy, or similar - Ability to work with complex data structures - Attention to detail and accuracy in data extraction - Familiarity with the legal and ethical issues of scraping...
I have a data-analysis pipeline that relies on a steady flow of fresh product images from a well-known e-commerce site. What I need is a robust scraper that can navigate the catalog, collect every product’s main and variant images, and deliver them to me neatly organized. Key points you should know: • Target: a single e-commerce platform (URL supplied after award). • Payload: high-resolution image files plus a CSV/JSON map linking each file to product ID, title, price, and category text that you extract during the same run. • Scale: thousands of products per crawl; a resumable approach is essential so partial failures don’t force a full restart. • Frequency: I’ll trigger the crawl weekly, so reusable code is a must. I’m happy with Pytho...
I need to obtain hard-to-reach details—specifically the IP address, associated phone number, and any location-related information—linked to one particular Telegram account. Standard OSINT searches have already been exhausted, so I’m explicitly open to advanced, purely technical hacking techniques that dig directly into Telegram traffic or MTProto behaviour. If this is within your skill set, tell me how you would approach the task, which tools or exploits you prefer to leverage, and what minimal input you require from my side (e.g., username, recent message, session file). Deliverables • Verified current or last-seen IP address for the target account • Recovered phone number (or clear statement if technically impossible) • Any additional address or geo...
I need OpenClaw on my dedicated Mac with three core capabilities: Chrome automation: open websites, click elements, fill forms, extract structured snippets, and return results in WhatsApp. Coding/app workflows: generate code locally and optionally interact with web dev platforms when commanded. Deep research workflows: run multi-step web research, compare sources, and return concise findings with references. Security and reliability are mandatory: least privilege, approved-user-only WhatsApp commands, startup on boot, restart on crash, logs, and health check.
I need a scrape that starts from , walks through every brand, opens each handset page, and captures the complete specification table exactly as shown. The end product I expect is: • A clean JSON file where every phone is an object containing every available field (model name, release date, dimensions, display, chipset, camera, battery—everything published on the spec sheet). Please make sure the scraper respects polite crawling rules, handles pagination and brand/model edge cases gracefully, and returns UTF-8 encoded text. If anything on the site requires minor waits or retries, handle it so nothing blocks your way. I will test the JSON data, and if it validates properly, the job is done.
I am looking for someone who has experience in document analysis and data extraction to develop themes out of the data.
There are around 20k reviews publicly available, so I can't scroll endlessly. I need you to scrape them for me and put them in a spreadsheet along with filters: 1 star to 5 stars. The job is simple for a professional, so please be realistic with prices. Should you do this correctly and fast, I will give you more leads to scrape. Thanks
I'm looking for a qualified freelancer to develop a bot that can navigate the Almaviva Egypt website just like a human would. The bot must be capable of completing three key tasks: - Filling out all necessary appointment-related information - Selecting the date and time of the appointment - Submitting the request for the appointment Considering the constraints of the website, I require a bot that can still function proficiently with a limited number of appointment slots. Moreover, it must be programmed to input login credentials. A crucial requirement is that it can bypass or solve captcha verifications, ensuring a smooth booking process. The essential skillset for this project comprises expertise in Python, as the bot should be developed in this language. Familiarity with web scra...
I have a folder full of supplier bills in PDF format and I need a clean, repeatable Python script that pulls everything of value out of them and drops it neatly into an Excel workbook. Here is what I expect: • The script must capture every text field that appears on each bill (invoice number, dates, vendor, totals and any other descriptors). • It should identify and export any tabular line-item sections so that quantities, descriptions and prices land in true Excel rows and columns—not as a single block of text. • Embedded images or logos also need to be saved out (ideally into a sub-folder) with a reference back to the originating invoice inside the Excel sheet. Python tools such as pdfplumber, PyPDF2, camelot, tabula-py, pandas and openpyxl are all fine; c...
I want to turn my existing catalogues and knowledge base into smart, intuitive WhatsApp agents. Whether you prefer OpenAI’s Agent Builder or an n8n flow is up to you—as long as the final bots handle automation for user support and information dissemination flawlessly. Users should be able to ask a question, receive the right document or answer instantly, and feel as if they are speaking with a well-trained human agent. Alongside the chat experience, I need an end-to-end AI pipeline that automatically extracts raw data from the web, aggregates and cleans it, performs analysis, and then publishes clear visualisations—including map views—so insights are always one step away. I’m comfortable with tools such as Python, Pandas, LangChain, Node, SQL, Power BI, Table...
I need to build an AI backend that autonomously reads recurring technical PDFs (the material will almost always arrive as text and audio converted into documents), identifies complex tables, descriptive text, and technical specifications, stores everything in a well-structured SQL database and, finally, makes that information available in a WhatsApp conversation. The ideal flow is simple: I send the PDF, the service processes it, normalizes each field into the relational database and, when I ask questions on WhatsApp, I get clear answers drawn from that same data. Essential for the project: • Faithful extraction of tables, paragraphs, and specification fields...
I am looking for a data entry specialist who has experience with hail maps. The project is small and simple. I am providing the sample data inside the attachment. Please look into the file. I know that this data is extracted from a hail trace map and it's free. But I don't know the map and don't know how to extract it. You need to show me this. Deliverable • Be able to extract geo-targeted data selected from the map. • A video showing how to extract the exact data from the hail map. My budget is $20 for showing me this.
Project Explained Simply: Automated Fundamental Health Checker This project is a simple tool that checks a company’s financial health automatically. You enter a stock ticker (like TATASTEEL or SUNPHARMA), and the tool: Pulls key financial data such as Assets, Liabilities, Revenue, EBITDA, PAT, and OCI Calculates Total Equity by subtracting liabilities from assets Compares the Intrinsic Value with the current market price Gives a clear Buy / Hold / Sell signal The tool is not meant to be a complex valuation model. It’s a working MVP that shows end-to-end execution. The demo is simple: Enter the ticker Click run Watch the financial data populate See the final decision instantly Including OCI shows a deeper understanding that net profit alone doesn’t tell the ful...
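The arithmetic described above is small enough to sketch. The 10% decision band below is an assumption for illustration, not part of the brief, and the intrinsic-value input would come from whatever valuation the tool computes upstream.

```python
def health_signal(assets, liabilities, intrinsic_value, market_price, band=0.10):
    """Total Equity = Assets - Liabilities; compare intrinsic value with
    market price to emit Buy / Hold / Sell.  `band` is a hypothetical 10%
    tolerance deciding how far apart the two prices must be to act."""
    equity = assets - liabilities
    if intrinsic_value > market_price * (1 + band):
        verdict = "Buy"
    elif intrinsic_value < market_price * (1 - band):
        verdict = "Sell"
    else:
        verdict = "Hold"
    return equity, verdict

equity, verdict = health_signal(assets=500, liabilities=300,
                                intrinsic_value=150, market_price=100)
# equity == 200, verdict == "Buy"
```

Wiring this to live data (e.g. a financials API) and the ticker-in, signal-out demo loop is the actual MVP work; the decision core stays this simple.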
I have a growing list of company names, and I need a small, reliable Python script that can: Automatically find each company’s career/jobs page where open positions are posted (pages may be built using HTML, JavaScript, or modern front-end frameworks) Navigate through all job listings, including: Pagination (page numbers, next/previous, etc.) “Load more” buttons Infinite scrolling Ability to fetch data from multiple pages (e.g., page 3, 4, or beyond) Apply job filters, especially location-based filtering, so that only job links for specific locations are collected Extract only individual job posting links after filters are applied Visit each job link and scrape complete job details, including: Job title Job description Location Employment type (if available) Department / ...
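The pagination/"load more"/infinite-scroll requirement above reduces to one generic walk, sketched here with the page-fetching step injected by the caller; all names are illustrative. The same loop then works whether pages come from plain HTTP requests or a browser-automation backend like Playwright.

```python
def collect_job_links(first_url, fetch_page, extract_links):
    """Walk a paginated listing and gather job links.

    fetch_page(url) -> (html, next_url_or_None) is supplied by the caller,
    so this loop is agnostic to how pages are actually rendered.
    The `seen` set guards against pagination loops.
    """
    links, seen, url = [], set(), first_url
    while url and url not in seen:
        seen.add(url)
        html, url = fetch_page(url)
        links.extend(extract_links(html))
    return links

# Tiny in-memory stand-in for a paginated careers site:
pages = {"p1": ("job-a job-b", "p2"), "p2": ("job-c", None)}
jobs = collect_job_links("p1",
                         lambda u: pages[u],
                         lambda html: html.split())
# jobs == ["job-a", "job-b", "job-c"]
```

Location filters would slot in naturally as a predicate applied inside `extract_links` before a link is kept.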
I need every bit of information currently stored in my Tally company—masters, vouchers, inventory, bank transactions, statutory ledgers, the lot—pulled out once and delivered in a clean, tabular Excel workbook. The extraction must be fully automated (TDL, ODBC, or any method you’re comfortable with) so I can rerun it later, but this engagement covers a single execution and hand-over. Deliverables • An Excel file where each dataset appears as a properly labeled table, with field names matching Tally, dates and numbers intact, ready for analysis or import elsewhere. We will provide Tally data file. Let me know which approach you prefer (TDL, ODBC, etc.) and how quickly you can turn the finished workbook around. Please also advise your working days and hours.
I need help transferring data from a small batch of Word documents—no more than five in total—into a clean, well-structured Excel workbook. All source files will be supplied in .docx format, and I’ll indicate exactly which fields, headings, and date or number formats I want in the spreadsheet. Here’s what the job involves: copying the required information, creating and naming columns as instructed, standardising dates and numeric values, and running a quick sweep to catch duplicates or obvious typos before you hand the file back. Accuracy matters more than speed; I will cross-check the sheet against the originals, so I’m looking for someone who is comfortable double-verifying their own work. Deliverables • One Excel file (.xlsx) containing all data fr...
I need an applied-AI and automation specialist who can look at my existing forensic accounting workflow (currently driven mostly in Excel, alongside Word and Google Sheets) and build a repeatable, transparent pipeline that does the heavy lifting while still letting me apply professional judgment before anything goes out the door. Here is what happens today: I send small to mid-size companies a standardized list of financial and inventory reports that I need to perform an independent bank audit against their loan(s). These reports include year-end and tax docs, inventory, insurance, banking docs, financial reconciliation reports, etc. I then upload all paperwork to BOX and reorganize everything for the next phase. Once captured, those figures get reconciled against a set of templated work...
1. Objective Develop a mandatory weekly vehicle checklist system fully operated via WhatsApp, with automated reminders, photo validation, cloud storage, and Excel tracking. Drivers must complete the checklist every Friday. The checklist is not considered completed until: All questionnaire fields are answered All required photos are uploaded The system validates everything 2. Input data (Excel – provided by client) An Excel file will be provided as the source of truth for vehicles and drivers. Sheet: VEHICLES_MASTER Each row = one vehicle Mandatory columns: VEHICLE_PLATE VEHICLE_TYPE → TRUCK or VAN DRIVER_NAME DRIVER_PHONE (WhatsApp number, international format) INTERNAL_ID (optional) DELEGATION (optional) This file determines: Who receives the WhatsApp messages...
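The completion rule stated above (every questionnaire field answered AND every required photo uploaded before the checklist counts as done) can be sketched as a validator; the field and photo names below are hypothetical placeholders for whatever the real questionnaire defines.

```python
REQUIRED_FIELDS = {"tyres_ok", "lights_ok", "oil_level"}   # hypothetical names
REQUIRED_PHOTOS = {"front", "rear", "dashboard"}           # hypothetical names

def checklist_complete(answers, photos):
    """True only when every required field has a non-empty answer
    and every required photo label has been received via WhatsApp."""
    answered = {k for k, v in answers.items() if v not in (None, "")}
    missing_fields = REQUIRED_FIELDS - answered
    missing_photos = REQUIRED_PHOTOS - set(photos)
    return not missing_fields and not missing_photos

done = checklist_complete(
    {"tyres_ok": "yes", "lights_ok": "yes", "oil_level": "ok"},
    ["front", "rear", "dashboard"],
)  # True
```

The reminder scheduler would simply re-ping, every Friday, any DRIVER_PHONE from VEHICLES_MASTER whose checklist still fails this check.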
Windows process automation (RPA) specialist + updating data on credentialed web portals. Project description: we are looking for a process automation (RPA) specialist in a Windows environment to deploy a solution that reduces manual work and errors in two key company workflows: 1) Automatic handling of invoices received by email: invoices arrive as attachments (mainly PDF) and must end up correctly recorded in our Windows-based management system. 2) Automatic updating of information on third-party web portals: based on data generated by our internal application, the status must be reflected (for example, of a vehicle...
FUNCTIONAL SPECIFICATION WhatsApp-Based Machine Photos & Document Management System Global Objective Use WhatsApp as the single input channel to automatically manage: General machine photos Machine identification plates Logistics documents (Delivery Notes / CMR / Transport Docs) Each image type must follow a separate, independent workflow, without mixing logic or data. FLOW 1 — GENERAL MACHINE PHOTOS Input Photos of machines sent via WhatsApp: Front, side, wheels, basket, display, etc. These images are not identification plates and not documents. System Logic The system must automatically detect a machine number visible in the image (painted number, sticker, marking). Example: 248 This detected number is used as the machine identifier. Cloud Storage If the folder d...
Please Read Carefully Before Applying It does not matter whether you consider yourself a “vibe coder” or a traditional software engineer we accept both here. What matters is whether you can make this system work reliably at scale. We operate a production scraper that processes 500+ leaderboard sites per hour. All sites we scrape are leaderboards, but no two sites are the same. This is not a basic scraper. What Makes This Scraper Different The leaderboards we scrape vary heavily in structure and behavior: Dynamic buttons, tabs, and switchers JavaScript-rendered content Hybrid navigation (UI interaction + background API calls) Tables, card layouts, podium layouts, or combinations of all three Masked usernames and inconsistent rank formats Different ordering of wager / prize data ...
PROJECT: AI-POWERED CONVERSATIONAL DOCUMENT CLOUD ACCESSIBLE VIA WHATSAPP 1. Overview The project consists of creating an intelligent document cloud, accessible primarily through WhatsApp, where users can ask anything related to the company (machines, documentation, spare parts, regulations, internal data, etc.), and the AI automatically returns the correct information, either as a text response or by delivering the exact PDF document required. This is not a traditional app and not a simple chatbot. It is the living memory of the company, organized in the cloud and accessed conversationally. 2. Entry Point: WhatsApp WhatsApp is the only access channel. The phone number identifies the user. There are no usernames or passwords. The system automatically recognizes: The phone number T...
I will supply several PDFs containing mixed text and numeric information, and I need every line transferred accurately into Excel within three days. The final workbook should be organized across multiple sheets rather than a single master tab. While the source files do not specify sheet titles, I’m open to your suggestions—please propose a clear, logical naming convention that makes navigation effortless. Accuracy is the top priority: totals must match the originals, text must be copied exactly, and no rows can be skipped. Once complete, return the finished .xlsx file plus any notes that explain your chosen sheet names or highlight ambiguous entries you want me to double-check.
I need a reliable specialist who can log into our dealership’s backend every weekday, pull fresh customer information, and feed it straight into our call-tracking platform the same day. The only data I’m after are contact details and service records—nothing else—so the extraction script or manual process can stay laser-focused on those two fields for speed and accuracy. Turnaround is critical. If you can set this up and have the first full export/import cycle running smoothly right away, I’m happy to add a rush bonus on top of the agreed rate. Accuracy must be spot-on and the data has to land in the tracking system without duplicates or formatting hiccups. Deliverables each weekday: • Clean export of new customer contact details and service record...
I have a spreadsheet with 200 US-based websites and I need the direct phone number of each owner. The numbers are not published on the sites themselves, so please pull them through your own account. Alongside every number, include the owner’s LinkedIn profile URL; no other fields are required. What I expect from you • A clean CSV or Google Sheet with three columns: Website, Owner Phone Number, LinkedIn Profile • Accuracy checked against Apollo’s latest data • Completion within 24 hours of project acceptance This is a quick job for an experienced user. I will review the sheet immediately and release payment within 24 hours once the data is verified.
I need a developer to collect data from multiple public websites and deliver it in a clean, structured format. This is for legitimate data extraction from publicly available pages. I will share the target URLs and exact data fields with shortlisted candidates. Scope of work Scrape data from multiple public websites (details shared after shortlisting) Extract specific fields consistently and handle pagination/filtering where needed Normalize/clean the data (remove duplicates, consistent formatting) Export results to CSV/Excel/JSON (format to be confirmed) Provide a repeatable solution (script or small app) that I can run on demand Basic documentation: how to run it, how to adjust settings, where outputs go Quality requirements Reliable scraping with error handling and retries Resp...
I need a concise technical blueprint that shows exactly how to pull data entered in the Cityworks front-end application through SQL Server Management Studio, then land it in the back-end SQL database on a batch schedule. The goal is to make sure a field on the form, "DUP. BAD CHECK", is populated correctly:
• L-BADCHECK — if a Bad Check fee is attached to a case, it's owed; status, type, etc. do not matter. The same applies to L-DUPLCTE.
• L-DUPBC — either or both a bad check fee and a duplicate fee from historic (pre-migration) records: 30 = BC, 20 = Dupe, 50 = Both (rare, if it exists).
You'll be working with:
• Cityworks PLL (latest build)
• Microsoft SQL Server 2019 + SSMS
What I expect from you:
• A clear, step-by-step document that maps Citywor...
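One way to read the fee-code mapping in that brief is as a simple two-flag lookup. This is a sketch of that interpretation only; the function name and boolean flags are hypothetical, and the real logic would live in a SQL query against the Cityworks schema rather than in Python.

```python
def fee_code(has_bad_check: bool, has_duplicate: bool):
    """Map the two fee flags to the historic codes described in the brief:
    30 = bad check only, 20 = duplicate only, 50 = both (rare),
    None = neither fee attached."""
    if has_bad_check and has_duplicate:
        return 50
    if has_bad_check:
        return 30
    if has_duplicate:
        return 20
    return None


fee_code(True, False)   # → 30
fee_code(True, True)    # → 50
```

Capturing the mapping as an explicit truth table like this makes it easy to verify the eventual SQL CASE expression against every combination before scheduling the batch job.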
I need an automated workflow that collects information from academic databases on a specific topic, validates its accuracy by comparing multiple sources, and stores it in a structured SQL database. Scope:
• Develop a Python script that crawls, extracts, and normalizes academic data.
• Validate each record with automatic cross-checks against at least two sources to reduce errors.
• Load the results (text and numbers) into clean tables with primary keys and well-defined relationships.
• Generate quality-control reports in Excel or Google Sheets showing discrepancies and records pending review.
• Implement an update mechanis...
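The cross-check step this brief asks for can be sketched as a three-way partition: records that agree across sources, records that disagree, and records seen in only one source. A minimal sketch, assuming records are dicts keyed by some stable identifier; the function name and key scheme are placeholders.

```python
def cross_validate(source_a: dict, source_b: dict):
    """Compare records keyed by ID across two sources.
    Returns (validated, discrepancies, pending) matching the brief's
    QC report: agreed, conflicting, and single-source records."""
    validated, discrepancies, pending = {}, {}, {}
    for key in set(source_a) | set(source_b):
        a, b = source_a.get(key), source_b.get(key)
        if a is not None and b is not None:
            if a == b:
                validated[key] = a
            else:
                discrepancies[key] = (a, b)
        else:
            pending[key] = a if a is not None else b
    return validated, discrepancies, pending


# Sample: r1 agrees, r2 conflicts, r3 appears in only one source.
ok, bad, todo = cross_validate(
    {"r1": "x", "r2": "y", "r3": "z"},
    {"r1": "x", "2": None, "r2": "Y"},
)
```

The `discrepancies` and `pending` dicts map directly onto the "discrepancies and records pending review" sheets the brief requests.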
Several hundred rows of records are currently trapped in a mix of PDF documents and image files. I need every line transferred into a clean, well-structured Excel workbook within the next 2–3 days. Here is what the job entails:
• Transcribe all data from the supplied PDFs and images into a single Excel file, keeping every field in its correct column.
• Remove any duplicate rows, fix alignment or date/number inconsistencies, and delete entries that are clearly incomplete.
• Apply quick, readable formatting (bold headers for every column and light color-coding to highlight key sections or totals) so the sheet is immediately understandable.
• Insert basic formulas where relevant: SUM ranges for totals, IF statements for simple checks, and VLOOKUPs t...
I need a clean pull of every location listed on the site. For each branch, please capture: country, state, complete address, service type, phone number, and email address. The final deliverable is a single Microsoft Excel workbook containing one sheet only. All columns should be clearly labelled and the range converted to an official Excel Table so I can apply native filters instantly. No additional filtering is required on your side; just be sure the table structure supports easy filtering by any column once I open the file. Accuracy matters more than speed: every location on the site has to be included and the contact details must match what is shown online. When you hand over the file I will spot-check a sample of entries against the live site to confirm completeness and correctness bef...
I need Octoparse templates built for roughly fifty manufacturer sites in the flooring & renovation niche. Each template must crawl the full product catalog and push clean, structured data into my Supabase database. The extraction scope includes: high-quality images, complete text descriptions and feature lists, links to warranty documents or other disclosures, detailed dimensions and specifications, style and color information, collection / color-family, and every SKU shown on the page. Price data is nice-to-have when present, but its absence should not break the run. Many product pages list matching accessories (trim, transitions, quarter-round, etc.). Your logic must identify those by shared style and color so they enter the database as related items. Typical sites you will start ...
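The "related items by shared style and color" requirement in that brief amounts to grouping SKUs on a composite key before loading them. A minimal sketch under the assumption that each scraped product arrives as a dict with `style`, `color`, and `sku` fields; the field names are placeholders for whatever the Octoparse templates actually emit.

```python
from collections import defaultdict


def group_related(products):
    """Group SKUs that share (style, color) so accessories such as trim
    and transitions land in the database as related items."""
    groups = defaultdict(list)
    for p in products:
        groups[(p["style"], p["color"])].append(p["sku"])
    return dict(groups)


catalog = [
    {"sku": "A1", "style": "oak", "color": "natural"},   # plank
    {"sku": "T9", "style": "oak", "color": "natural"},   # matching trim
    {"sku": "B2", "style": "maple", "color": "grey"},
]
related = group_related(catalog)
# → {("oak", "natural"): ["A1", "T9"], ("maple", "grey"): ["B2"]}
```

In Supabase the same idea becomes a `related_group` foreign key or a join table keyed on the (style, color) pair, so the grouping survives incremental re-crawls.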
I’m ready to hand over three full years of PDF statements for five separate bank and credit-card accounts and need every single transaction moved into Excel. You’ll be working with 2022, 2023 and 2024 files; each account should end up with its own workbook tab for each year so I can switch between them quickly. I’ll supply a sample sheet that shows the custom headers and layout I want—date, description, amount, balance, plus a few extra columns for categories and notes. Please keep that structure identical across all tabs. During extraction you’ll also need to clean the data: normalize dates, strip out blank rows, fix any OCR quirks, and make sure credits and debits sit in the correct signed columns. In short, I want a spreadsheet that’s ready for pivot...
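Two of the cleaning steps above (normalizing dates and putting credits and debits in correctly signed columns) are mechanical enough to sketch. The format list and the credit-positive convention below are assumptions; the real rules come from the client's sample sheet.

```python
from datetime import datetime

# Assumed statement date layouts; extend per the actual PDFs.
DATE_FORMATS = ("%m/%d/%Y", "%d %b %Y", "%Y-%m-%d")


def normalize_date(raw: str) -> str:
    """Coerce any recognized date layout to ISO YYYY-MM-DD."""
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    raise ValueError(f"unrecognized date: {raw!r}")


def signed_amount(amount: float, kind: str) -> float:
    """One common convention: credits positive, debits negative.
    Flip if the sample sheet specifies the opposite."""
    return abs(amount) if kind == "credit" else -abs(amount)


normalize_date("03/15/2023")   # → "2023-03-15"
signed_amount(50.0, "debit")   # → -50.0
```

Running every transcribed row through functions like these before it reaches the workbook is what makes the final sheet "ready for pivot" tables: consistent dates sort correctly and signed amounts sum without manual fixes.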