Crawled results will filter out duplicate domains. So if both a www. variant and the bare domain appear, we want to use only one of them.
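A minimal sketch of that de-duplication step in Python (the function names are my own; the rule is simply "strip www. and compare bare domains"):

```python
from urllib.parse import urlparse

def normalize_domain(url):
    """Reduce a crawled URL to its bare domain so that
    'www.example.com' and 'example.com' count as one site."""
    host = urlparse(url).netloc or urlparse("//" + url).netloc
    host = host.lower().split(":")[0]      # drop any port
    if host.startswith("www."):
        host = host[len("www."):]
    return host

def dedupe(urls):
    """Keep only the first URL seen for each normalized domain."""
    seen, unique = set(), []
    for url in urls:
        dom = normalize_domain(url)
        if dom not in seen:
            seen.add(dom)
            unique.append(url)
    return unique
```

A fuller version would also collapse other subdomains onto the registered domain, which needs a public-suffix list.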
1. Identify the contact page (most often [url removed, login to view] and/or [url removed, login to view], but it can vary).
Scrape the contact name (if it appears), the email addresses, and the telephone numbers. This information should be saved in an Excel file.
Would love it if the software could also submit contact forms on WordPress sites, etc., and perhaps support a captcha-solving API for that purpose. There is lots of code on GitHub/SourceForge for this.
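The contact-page scraping in step 1 can be sketched with stdlib regexes (the patterns below are illustrative, not production-grade; real pages will need tuning and will produce some false positives):

```python
import re

# Loose patterns: good enough to harvest candidates from raw HTML.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def extract_contacts(html):
    """Pull candidate email addresses and phone numbers out of
    raw contact-page HTML."""
    emails = sorted(set(EMAIL_RE.findall(html)))
    phones = sorted(set(m.strip() for m in PHONE_RE.findall(html)))
    return {"emails": emails, "phones": phones}
```

Contact names are much harder to extract reliably by regex; in practice that field would stay best-effort.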
2. Extract WHOIS contact information and, if possible, the same from the website's Contact page. The only things we want are the website, contact name, email address, and telephone number of the webmaster. The software will have an option to save the output according to the search engine it was crawled from, in Excel, CSV, or TXT.
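Whatever client performs the WHOIS lookup, the response is key/value text that varies by registry. A hedged sketch of pulling the requested fields out of a raw response (the key names below are common examples, not a complete list):

```python
def parse_whois(raw):
    """Extract the fields the brief asks for (contact name, email,
    phone) from a raw WHOIS response. Registries use different key
    names, so a real parser needs more aliases than shown here."""
    wanted = {
        "Registrant Name": "name",
        "Registrant Email": "email",
        "Registrant Phone": "phone",
    }
    out = {}
    for line in raw.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        field = wanted.get(key.strip())
        if field and value.strip():
            out.setdefault(field, value.strip())
    return out
```

Note that most registries now redact registrant details for privacy, so the Contact-page fallback matters.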
3. The software should have a built-in WYSIWYG editor, support multiple SMTP credentials and proxies for sending emails, and be able to do the scraping and then send the emails immediately afterwards.
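One way to sketch the multiple-SMTP-credentials requirement is a round-robin pool over placeholder accounts (hostnames and credentials below are made up; routing the SMTP connection through a proxy would sit underneath the socket and is omitted here):

```python
import itertools
import smtplib
from email.message import EmailMessage

class SmtpPool:
    """Rotate through several SMTP accounts so no single server
    carries the whole sending load."""
    def __init__(self, accounts):
        # accounts: list of (host, port, user, password) tuples
        self._cycle = itertools.cycle(accounts)

    def next_account(self):
        return next(self._cycle)

    def send(self, msg):
        host, port, user, password = self.next_account()
        with smtplib.SMTP(host, port) as smtp:
            smtp.starttls()
            smtp.login(user, password)
            smtp.send_message(msg)

def build_message(sender, to, subject, body):
    """Assemble a plain-text message; a WYSIWYG editor would feed
    HTML into set_content(..., subtype='html') instead."""
    msg = EmailMessage()
    msg["From"], msg["To"], msg["Subject"] = sender, to, subject
    msg.set_content(body)
    return msg
```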
Menu Options of Software:
Define how many sites to crawl and define the tasks related to emails. There should be a menu to output success/error logs, select which search engines to crawl (all selected by default), select which 'site search' platforms to use (all selected by default), configure additional footprints, and produce the output of results (the compiled database) in Excel or CSV format.
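The CSV output option could look like the following sketch (the column names are my assumption of what the compiled database holds; true Excel .xlsx output would need a third-party library such as openpyxl):

```python
import csv

# Assumed columns, matching the fields the brief asks for.
FIELDS = ["website", "contact_name", "email", "phone", "source_engine"]

def write_results(path, rows):
    """Write scraped result rows (dicts keyed by FIELDS) to a CSV
    file, one of the output formats listed in the brief."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(rows)
```

The source_engine column allows one file per search engine, as the brief requests, by filtering rows before writing.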
Also, the menu should show how many proxies are working, and the software should pick from them at random when extracting from search engines.
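The "how many proxies are working" counter plus random selection can be sketched as a small pool (the liveness check itself, e.g. a periodic test fetch through each proxy, is stubbed out by mark_dead/mark_alive here):

```python
import random

class ProxyPool:
    """Track which proxies are alive and hand out a random live one
    per search-engine request, spreading load so no single proxy
    gets rate-limited."""
    def __init__(self, proxies):
        self._alive = set(proxies)
        self._dead = set()

    def mark_dead(self, proxy):
        self._alive.discard(proxy)
        self._dead.add(proxy)

    def mark_alive(self, proxy):
        self._dead.discard(proxy)
        self._alive.add(proxy)

    def working_count(self):
        # This is the number the menu would display.
        return len(self._alive)

    def pick(self):
        if not self._alive:
            raise RuntimeError("no working proxies")
        return random.choice(sorted(self._alive))
```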
For example, if we selected "Google Site Search" and used "Google, Bing and Yahoo" to get the results, we should be able to create a database based on that.
By default, all search engines are used to find all the results for each 'site search' platform, and duplicate domains are erased before following up with checks of the website CONTACT and WHOIS info.
Note: there are tons of open-source projects for proxies, scraping, WHOIS lookups, and everything else written here on SourceForge and GitHub. So this is like Lego.
If you can create the Jonathan Siennicki software, please let me know the language (it can be web-based, but preferably a binary application). Please give a price, a delivery time, and also any software resume you have to convince us you're the right person for the job.
We've prepared an attached document with various footprints of the platforms that offer a service similar to Google Site Search.