Need a web spider scraping program written for the following username DFUSER11653

I need a web scraper written for the .xlsx file in the following directory:

[login to view URL]

The latest .xlsx file within that directory will need to be downloaded.

The name of the file is subject to change, so the script will need to identify the most recent file with a .xlsx extension.
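The "latest .xlsx" step above could be sketched as follows. This is an illustration in Python, not the Perl deliverable, and since the real directory URL is withheld in the posting, the listing below is a made-up example of an assumed Apache-style directory index:

```python
# Sketch: pick the newest .xlsx link out of an Apache-style directory
# listing.  The listing HTML here is invented sample data; the real
# directory URL is withheld in the posting.
import re
from datetime import datetime

def latest_xlsx(listing_html):
    """Return the href of the most recently modified .xlsx file, or None."""
    # Assumed row format:
    #   <a href="loads_0316.xlsx">loads_0316.xlsx</a>  2017-03-16 08:00  24K
    pattern = re.compile(
        r'href="([^"]+\.xlsx)"[^>]*>.*?</a>\s*(\d{4}-\d{2}-\d{2} \d{2}:\d{2})'
    )
    candidates = [
        (datetime.strptime(stamp, "%Y-%m-%d %H:%M"), href)
        for href, stamp in pattern.findall(listing_html)
    ]
    if not candidates:
        return None
    # max() compares the datetime first, so this is the newest file
    return max(candidates)[1]

listing = '''
<a href="loads_0315.xlsx">loads_0315.xlsx</a>  2017-03-15 07:45  23K
<a href="loads_0316.xlsx">loads_0316.xlsx</a>  2017-03-16 08:00  24K
<a href="readme.txt">readme.txt</a>            2017-03-10 12:00  1K
'''
print(latest_xlsx(listing))  # loads_0316.xlsx
```

Keying on the listing's last-modified timestamp, rather than the filename, is what makes this robust to the file being renamed.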

The output should be a pipe (|) delimited file with the following column mappings:

origin_city --> data located in column "C"; if the cell contains a comma with data after it, only add the data BEFORE the comma

origin_state --> data located in column "D"

ship_date --> the date from column "A" changed to the YYYY-MM-DD format

destination_city --> data located in column "F"; if the cell contains a comma with data after it, only add the data BEFORE the comma

destination_state --> data located in column "G"

receive_date --> leave blank

trailer_type --> the abbreviation located in column "B"

load_size --> data located in column "I"

weight --> data located in column "K"

length --> data located in column "J"

width --> leave blank

height --> leave blank

trip_miles --> leave blank

pay_rate --> data located in column "L"

contact_phone --> data located in column "O"

contact_name --> leave blank

tarp_required --> leave blank

comment --> data located in column "P"

load_number --> data located in column "Q"

commodity --> data located in column "M"

The first line of the output should contain all of the column headers.

Any field that contains no data should be left blank.

Please do not use words like "null" or "blank" in blank columns.

Below is a sample output of the first 5 columns using sample data:


chicago|IL|2017-03-15|new york|NY|

kansas city|MO|2017-03-15|houston|TX|
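The per-row mapping above can be sketched like this. It is written in Python purely for illustration (the actual deliverable is Perl with Modern::Perl); rows are modeled as dicts keyed by spreadsheet column letter, and the input date format M/D/YYYY is an assumption:

```python
# Sketch of the column mapping from the posting.  Python is used for
# illustration only; the deliverable would be Perl.
from datetime import datetime

HEADERS = [
    "origin_city", "origin_state", "ship_date", "destination_city",
    "destination_state", "receive_date", "trailer_type", "load_size",
    "weight", "length", "width", "height", "trip_miles", "pay_rate",
    "contact_phone", "contact_name", "tarp_required", "comment",
    "load_number", "commodity",
]

def before_comma(value):
    """Keep only the text before the first comma, if any."""
    return value.split(",", 1)[0].strip()

def to_iso(value):
    """Reformat a cell date as YYYY-MM-DD (M/D/YYYY input is assumed)."""
    return datetime.strptime(value, "%m/%d/%Y").strftime("%Y-%m-%d")

def map_row(row):
    """Turn one spreadsheet row into one pipe-delimited output record."""
    fields = [
        before_comma(row.get("C", "")),   # origin_city
        row.get("D", ""),                 # origin_state
        to_iso(row["A"]),                 # ship_date
        before_comma(row.get("F", "")),   # destination_city
        row.get("G", ""),                 # destination_state
        "",                               # receive_date (left blank)
        row.get("B", ""),                 # trailer_type
        row.get("I", ""),                 # load_size
        row.get("K", ""),                 # weight
        row.get("J", ""),                 # length
        "", "", "",                       # width, height, trip_miles
        row.get("L", ""),                 # pay_rate
        row.get("O", ""),                 # contact_phone
        "",                               # contact_name
        "",                               # tarp_required
        row.get("P", ""),                 # comment
        row.get("Q", ""),                 # load_number
        row.get("M", ""),                 # commodity
    ]
    return "|".join(fields)

# Invented sample row matching the first line of the sample output
row = {"A": "3/15/2017", "B": "F", "C": "chicago, IL", "D": "IL",
       "F": "new york, NY", "G": "NY"}
print("|".join(HEADERS))
print(map_row(row))
```

Missing cells fall through to `""`, so empty fields come out as bare pipes with no "null"/"blank" filler, exactly as the spec requires.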

The deliverable will be a Perl .pl file that must run on Ubuntu Linux and must use Modern::Perl. The Perl .pl file should be called '[login to view URL]' and the output file should be called '[login to view URL]'.

It will be scheduled in cron to run unattended every 15 minutes.
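A crontab entry for that 15-minute schedule might look like the following; the script path and log path are placeholders, since the real filenames are withheld in the posting:

```
# every 15 minutes, append stdout/stderr to a log (paths are placeholders)
*/15 * * * * /usr/bin/perl /home/user/scraper.pl >> /var/log/scraper.log 2>&1
```

Redirecting output matters for an unattended job: cron otherwise mails or discards anything the script prints.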

Please specify which language/OS/modules you plan to use.

Also, please include the word "raccoon" in your bid so I know that you read this description.

Skills: Linux, Perl, Web Scraping


About the Employer:
( 68 reviews ) Chillicothe, United States

Project ID: #18939924

Awarded to:


I can provide you same Perl scraping code as before (using WWW::Mechanize and other modules). I'll complete it in less than a day.

$88 USD in 1 day
(480 Reviews)

3 freelancers are bidding on average $143 for this job


Hi there, I am a Java developer with 7+ years of experience. I have strong expertise in web crawling, web scraping, website monitoring, bots, and web automation. I have already made lots of scraper and monitoring tools.

$230 USD in 3 days
(11 Reviews)

Hi there, I am working on a script written as a .NET application using C#. Waiting for your response. Regards, tscreators

$111 USD in 3 days
(6 Reviews)