I want Python code to
1) run an existing script on a web page (<[login to view URL]>),
2) read the output (a list of URLs) displayed on the web page in 1), and
3) read the text from each URL in the list in 2) and write the text from each URL into its own .txt file under a specified directory path.
(See the more detailed description below.)
## Deliverables
When a Wikipedia page URL is pasted into the "wikipedia url" box on <[login to view URL]> and the "whateva!" button is clicked, a number of URLs are displayed.
For example, if the URL <[login to view URL]> is pasted and the default options are used, about 7,300 URLs are displayed.
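Since the actual page URL is redacted above, here is only a minimal sketch of step 2), reading the list of displayed URLs, assuming the results are rendered as ordinary `<a href="…">` links in the page's HTML. It uses only the standard library; the class and function names are my own, and the sample HTML snippet is made up for illustration.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect every href found in <a> tags on the results page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html_text):
    """Return all link targets found in the given HTML, in document order."""
    parser = LinkCollector()
    parser.feed(html_text)
    return parser.links

# Made-up snippet standing in for the real results page:
sample = ('<a href="https://en.wikipedia.org/wiki/Cat">Cat</a> '
          '<a href="https://en.wikipedia.org/wiki/Dog">Dog</a>')
print(extract_links(sample))
# → ['https://en.wikipedia.org/wiki/Cat', 'https://en.wikipedia.org/wiki/Dog']
```

If the site builds its URL list with client-side JavaScript rather than plain HTML, a browser-automation tool such as Selenium would be needed instead of a plain HTTP fetch.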
I would like Python code that
1) takes the inputs on <[login to view URL]>, as well as a directory path,
2) runs the script on <[login to view URL]> on the page given in the "wikipedia url" box, and
3) scrapes the text (text only) from each of the URLs displayed on the page, putting the text from each web page into its own .txt file named [login to view URL], where Title is the title of the web page from which the text is taken.
For the example above, running this script would produce about 7,300 appropriately named text files under the specified path.