Posted: Within 24 hours
Fixed Price: Less than $500   |  Posted: 5h, 11m ago  |  Ends: 14d, 18h  |   9 Proposals
We need to gather a list of contacts for Wastewater Treatment Plants in the United States. We need a spreadsheet with the following data:
1. Location of plant (City, State)
2. Contact name (should be the plant manager or a similar title)
3. Email
4. Phone number
Here is a potential starting point:   [obscured]  /ht/a/GetDocumentAction/id/51820
Please provide a quote per 500 entries.
Category: Email Marketing       

Sign in to view client's details.
| c****ook
|    United States
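As a rough sketch of the deliverable format described above (Python, stdlib only; the column names come from the post, while the row content is invented placeholder data, not a real plant contact):

```python
import csv
import io

# Columns requested in the post; the sample row is invented placeholder
# data, not a real wastewater-plant contact.
COLUMNS = ["location", "contact_name", "email", "phone"]

def write_sheet(rows):
    """Serialize contact rows to CSV with the requested column layout."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

sheet = write_sheet([{
    "location": "Springfield, IL",
    "contact_name": "Jane Doe",
    "email": "jdoe@example.com",
    "phone": "555-0100",
}])
```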
Hourly Rate: Not Sure   |  Duration: Not Sure  |  Posted: 5h, 24m ago  |  Ends: 14d, 18h  |   7 Proposals
Hello, I need someone to scrape Amazon data using UPC codes. I have around 5,500 SKUs. I have attached an Excel sheet in which you need to fill out the columns in red using the UPC code. We need data on ASIN, weight, BSR, and Buy Box price. Please contact me with your best offer and timeline. Thanks.
Category: Data Analysis       

Sign in to view client's details.
| i****ad5
|    Canada
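The column-filling step the post describes could be sketched as below. The catalog lookup is stubbed with a static dict so the sketch runs offline; in the real job it would query Amazon by UPC (e.g. via the Product Advertising API or page scraping), and the rows would come from the attached Excel sheet rather than a hard-coded list. All values shown are invented.

```python
# Hypothetical stand-in for a per-UPC Amazon lookup; every value here is
# an invented example, not real catalog data.
FAKE_CATALOG = {
    "012345678905": {
        "asin": "B000EXAMPLE",
        "weight": "1.2 lb",
        "bsr": "4521",
        "buybox_price": "19.99",
    },
}

# The "columns in RED" from the attached sheet: ASIN/Weight/BSR/BuyBox.
RED_COLUMNS = ["asin", "weight", "bsr", "buybox_price"]

def fill_red_columns(rows, catalog):
    """Fill the red columns for each row, keyed by the row's UPC."""
    for row in rows:
        data = catalog.get(row["upc"], {})
        for col in RED_COLUMNS:
            row[col] = data.get(col, "")  # leave blank when UPC is unknown
    return rows

rows = fill_red_columns(
    [{"upc": "012345678905"}, {"upc": "000000000000"}],
    FAKE_CATALOG,
)
```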
Fixed Price: Not Sure   |  Posted: 10h, 5m ago  |  Ends: 14d, 13h  |   5 Proposals
Hello, we require an R programmer to develop a script that does the following:
- Using the rvest package (or another method), scrape specific data from an HTML page (possibly utilizing the getselector tool).
- Output the data into a matrix.
- The website and the data required will be provided upon request.
The script/function must be able to iterate through a vector containing 14k+ different URLs. One of the issues I encountered with the rvest package was dealing with missing node values; ideally, any missing node value should output NA (or the like) into the matrix. Given the size of the data, the code needs to be tight and efficient on processing power, hence the need for an R expert. I am aware that R may not be the best solution for the scraping process. If you feel that Python would be better suited for this task, then we are open to your suggestions. The key point is that we need the data passed into a form that can be analyzed in R. If this project goes well, we have plans ...
Category: Statistics       
Skills: Statistics, HTML, Python, R       

Sign in to view client's details.
| c****cpa
|    United States
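The "missing node should output NA" requirement above could be sketched as follows, in Python per the client's openness to it. Stdlib regexes stand in for rvest's node selectors so the sketch runs offline; the patterns and page markup are invented examples, and a real scraper would use a proper HTML parser and CSS selectors instead.

```python
import re

def extract(pattern, html):
    """Return the first capture group, or "NA" when the node is missing."""
    match = re.search(pattern, html)
    return match.group(1) if match else "NA"

def scrape_pages(pages):
    """Iterate over pages (one per URL) and build one matrix row per page."""
    return [
        [extract(r"<h1>(.*?)</h1>", html),
         extract(r'<span class="price">(.*?)</span>', html)]
        for html in pages
    ]

# Invented example markup; the second page is missing its price node.
pages = [
    '<h1>Widget</h1><span class="price">9.99</span>',
    "<h1>Gadget</h1>",
]
matrix = scrape_pages(pages)  # [["Widget", "9.99"], ["Gadget", "NA"]]
```

The same shape maps back to rvest: wrap each node lookup so an empty result yields `NA` rather than dropping the row, keeping the output matrix rectangular across all 14k+ URLs.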
Hourly Rate: Not Sure   |  Duration: Not Sure  |  Posted: 14h, 16m ago  |  Ends: 14d, 9h  |   23 Proposals
I am looking for a web scraper to get some websites scraped. I'm looking for somebody who can ensure timely delivery of results with no compromises on quality. Please get back to me if interested.
Category: Data Entry       

Sign in to view client's details.
| D****_92
|    India
Fixed Price: Not Sure   |  Posted: 20h, 58m ago  |  Ends: 14d, 3h  |   17 Proposals
I have an unfinished project that needs to be completed ASAP. It requires writing robots to scrape 3 websites for cars matching certain simple filters (the otomoto part seems to be complete). The data is gathered into a Postgres 9.4 database, where it is filtered and displayed on a simple dashboard written in Ruby. On this part I need to add an integration with a service that sends text messages to the gathered numbers under certain conditions. The server is on DigitalOcean running Ubuntu. I am looking for a skilled and responsible person to handle this project in a timely manner. I have already secured further, bigger projects with the client, and I am looking for a responsible and skilled person to help me deliver them. This project will also require monthly maintenance, so it is not a one-time deal. Please post your offers along with an estimated delivery time.
Category: Other IT & Programming       

Sign in to view client's details.
| t****app
|    Thailand
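The "text messages under certain conditions" step above could be sketched like this (in Python for illustration, though the post's dashboard is Ruby). `send_sms` is a hypothetical callable wrapping whatever SMS service gets chosen, the price threshold is a placeholder for the client's real condition, and the listings would come from the Postgres database rather than a literal list.

```python
def matching_listings(cars, max_price):
    """Select gathered listings that satisfy the alert condition."""
    return [car for car in cars if car["price"] <= max_price]

def notify(cars, send_sms):
    """Send one text per matching listing to the gathered number."""
    for car in cars:
        send_sms(car["phone"], f"Match: {car['model']} at {car['price']}")

# Invented sample listings and a capture-only stand-in for send_sms.
sent = []
cars = [
    {"model": "Golf", "price": 4500, "phone": "+48100000000"},
    {"model": "A4", "price": 9900, "phone": "+48100000001"},
]
notify(matching_listings(cars, 5000),
       lambda phone, msg: sent.append((phone, msg)))
```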
Fixed Price: Less than $500   |  Posted: 22h, 53m ago  |  Ends: 14d, 1h  |   14 Proposals
Hello, I'm looking for someone to program a crawler and scraper for a list of sites. The crawler should return the URLs of all web pages that contain a recipe. The crawler should have an option to run automatically once a day/week (I have not decided, but the possibility to change should remain), and if a page has been scanned before, it should not be passed to the scraper again. The scraper should go into each page and extract the recipe name, the URL of the recipe's photo, the preparation time, the list of ingredients (each ingredient in a separate column), the directions, and the URL of the recipe. I do not care what language it is written in; of course, the more automatic the runs the better, and in the end I want a single database. List of websites:   [obscured]  /recipes.html   [obscured]     [obscured]     [obscured]  /recipe   [obscured]  /recipes   [obscured]  /recipes   [obscured]     [obscured]  /recipes/food-and-recipes.html   [obscured]  (...
Category: Other IT & Programming       
Skills: Data scraping, Web scraping, Web Crawling       

Sign in to view client's details.
| y****t02
|    Israel
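The crawl/scrape split described above could be sketched as below: the crawler collects recipe URLs and skips anything seen on an earlier run, so a daily/weekly rerun only feeds new pages to the scraper. Link discovery and page fetching are stubbed with dicts so the sketch runs offline; all site names, URLs, and recipe fields here are invented.

```python
def crawl(sites, seen, discover):
    """Return recipe-page URLs that have not been scanned before."""
    fresh = []
    for site in sites:
        for url in discover(site):   # in practice: fetch site, follow links
            if url not in seen:
                seen.add(url)        # persist `seen` between scheduled runs
                fresh.append(url)
    return fresh

def scrape(url, fetch):
    """Turn one recipe page into a single database row."""
    page = fetch(url)                # in practice: HTTP GET + HTML parsing
    return {
        "name": page["name"],
        "photo_url": page["photo"],
        "prep_time": page["prep"],
        "ingredients": page["ingredients"],  # one column per ingredient
        "directions": page["directions"],
        "url": url,
    }

# Offline stand-ins for the network calls (invented data throughout).
links = {"siteA": ["siteA/r1", "siteA/r2"]}
pages = {
    u: {"name": "Recipe " + u[-1], "photo": u + ".jpg", "prep": "30 min",
        "ingredients": ["flour", "eggs"], "directions": "Mix."}
    for u in links["siteA"]
}

seen = {"siteA/r1"}                  # r1 was scanned on a previous run
new_urls = crawl(["siteA"], seen, links.get)
rows = [scrape(u, pages.get) for u in new_urls]
```

The `seen` set is the piece that makes the scheduled-run requirement work: stored in the final database (or a side table), it guarantees a page scanned last week is never scraped twice.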