Golang Colly Scraper
Price is negotiable
Project Description:
Develop a high-performance crawler capable of traversing millions of domains quickly, extracting links and navigating up to a specified depth for each domain. Speed is essential.
Technologies:
- Programming Language: Golang, with no specific preference for libraries as long as they are among the fastest and best suited for the project's needs.
- Database: Redis, used as a queue to temporarily store the results of each crawled page in JSON format.
Key Features:
- Retrieve the list of domains from an API that returns domain names in JSON. The polling frequency for new domains should be configurable in seconds or minutes (see the polling sketch after this list).
- For each domain, extract links (CSS, JS, FORM, LINK, JSON) up to a specified crawl depth and a maximum number of pages per host. Depth and page limits should be easy to adjust in a configuration file (see the config sketch after this list).
- For each host, record its IP address, TLD (e.g., .com), and port (e.g., 80); for each page, capture the source, URL, crawl depth, and presumed file type based on Content-Type or extension (see the metadata sketch after this list).
- Maintain a blacklist of extensions and content types, avoid duplicates through URL normalization, and cap the page source size at a configurable limit (see the normalization sketch after this list).
- Save crawl results to Redis as JSON in batches, ready for ingestion into Elasticsearch; the number of documents per batch should be configurable (see the batching sketch after this list).
- The number of workers should be sized automatically from server capacity but remain adjustable via the configuration file for maximum flexibility. Requests per domain must be rate-limited so the crawler does not effectively DDoS hosts (see the collector sketch after this list).
- Guard rigorously against memory leaks to sustain performance over long runs (see the monitoring sketch after this list).
- Provide logging and performance monitoring via Prometheus (also covered in the monitoring sketch).
- Handle redirects and crawl both http and https: if http://amazon.com does not work or redirects, try https://amazon.com; if that also fails, skip the domain (see the collector sketch after this list).
- Ignore robots.txt (set in the collector sketch).
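
A minimal polling sketch for the first item above, assuming the API is a single GET endpoint returning `{"domains": [...]}`; the endpoint URL, response shape, and channel wiring are illustrative, not part of the spec:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"time"
)

type domainList struct {
	Domains []string `json:"domains"` // assumed response shape
}

// pollDomains fetches the domain API on a fixed interval and feeds
// new domains into the crawl queue. The interval comes from config.
func pollDomains(endpoint string, every time.Duration, out chan<- string) {
	ticker := time.NewTicker(every)
	defer ticker.Stop()
	for range ticker.C {
		resp, err := http.Get(endpoint)
		if err != nil {
			log.Printf("domain API: %v", err)
			continue
		}
		var list domainList
		if err := json.NewDecoder(resp.Body).Decode(&list); err != nil {
			log.Printf("decode: %v", err)
		}
		resp.Body.Close()
		for _, d := range list.Domains {
			out <- d
		}
	}
}

func main() {
	domains := make(chan string, 1024)
	go pollDomains("https://api.example.com/domains", 30*time.Second, domains)
	for d := range domains {
		log.Println("queueing", d)
	}
}
```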
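One possible config sketch, assuming YAML via gopkg.in/yaml.v3; every field name here is a placeholder to be agreed on:

```go
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// Config gathers the tunables named in this spec: polling frequency,
// depth, page limits, body-size cap, batch size, and worker count.
type Config struct {
	PollIntervalSeconds int      `yaml:"poll_interval_seconds"`
	MaxDepth            int      `yaml:"max_depth"`
	MaxPagesPerHost     int      `yaml:"max_pages_per_host"`
	MaxBodyBytes        int      `yaml:"max_body_bytes"`
	BatchSize           int      `yaml:"batch_size"`
	Workers             int      `yaml:"workers"` // 0 = auto-size from CPU count
	BlacklistExtensions []string `yaml:"blacklist_extensions"`
}

func Load(path string) (*Config, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var c Config
	if err := yaml.Unmarshal(raw, &c); err != nil {
		return nil, err
	}
	return &c, nil
}

func main() {
	cfg, err := Load("config.yaml")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cfg)
}
```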
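A metadata sketch using only the standard library; treating the TLD as the last dot-separated label is a simplification (a production crawler might prefer golang.org/x/net/publicsuffix):

```go
package main

import (
	"fmt"
	"net"
	"net/url"
	"strconv"
	"strings"
)

type hostInfo struct {
	Host string
	IP   string
	TLD  string
	Port int
}

func inspect(raw string) (hostInfo, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return hostInfo{}, err
	}
	// Default port follows the scheme unless the URL names one explicitly.
	port := 80
	if u.Scheme == "https" {
		port = 443
	}
	if p := u.Port(); p != "" {
		if n, err := strconv.Atoi(p); err == nil {
			port = n
		}
	}
	// First resolved address; a production crawler would cache lookups.
	ip := ""
	if addrs, err := net.LookupIP(u.Hostname()); err == nil && len(addrs) > 0 {
		ip = addrs[0].String()
	}
	labels := strings.Split(u.Hostname(), ".")
	return hostInfo{Host: u.Hostname(), IP: ip, TLD: labels[len(labels)-1], Port: port}, nil
}

func main() {
	info, _ := inspect("http://www.subdomain.example.com/page1")
	fmt.Printf("%+v\n", info)
}
```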
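A normalization sketch for de-duplication plus the extension blacklist check; the exact rules (lowercased host, dropped fragments and default ports, trimmed trailing slash) are assumptions to tune. The page-size cap pairs naturally with io.LimitReader at fetch time, shown in the response sketch further down:

```go
package main

import (
	"fmt"
	"net/url"
	"path"
	"strings"
)

// Abbreviated here; load the full list from the config file.
var blacklisted = map[string]bool{".mp3": true, ".zip": true, ".jpg": true}

// normalize lowercases the host, drops fragments and default ports, and
// trims trailing slashes so duplicate URLs compare equal.
func normalize(raw string) (string, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", err
	}
	u.Host = strings.ToLower(u.Host)
	u.Fragment = ""
	if (u.Scheme == "http" && u.Port() == "80") || (u.Scheme == "https" && u.Port() == "443") {
		u.Host = u.Hostname()
	}
	u.Path = strings.TrimSuffix(u.Path, "/")
	return u.String(), nil
}

// blacklistedExt reports whether the URL path ends in a banned extension.
func blacklistedExt(raw string) bool {
	u, err := url.Parse(raw)
	if err != nil {
		return true
	}
	return blacklisted[strings.ToLower(path.Ext(u.Path))]
}

func main() {
	n, _ := normalize("http://Example.com:80/a/#frag")
	fmt.Println(n, blacklistedExt("http://example.com/song.MP3")) // http://example.com/a true
}
```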
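A batching sketch using github.com/redis/go-redis/v9; the list key `crawl:pages` and the trimmed document struct are assumptions:

```go
package main

import (
	"context"
	"encoding/json"
	"log"

	"github.com/redis/go-redis/v9"
)

type pageDoc struct {
	MD5    string `json:"md5"`
	Domain string `json:"domain"`
	URL    string `json:"url"`
	Depth  int    `json:"depth"`
	// ... remaining fields from the expected JSON document below
}

// flush pushes one batch onto a Redis list in a single pipelined
// round trip, ready for a bulk loader to drain into Elasticsearch.
func flush(ctx context.Context, rdb *redis.Client, batch []pageDoc) error {
	pipe := rdb.Pipeline()
	for _, doc := range batch {
		raw, err := json.Marshal(doc)
		if err != nil {
			return err
		}
		pipe.RPush(ctx, "crawl:pages", raw)
	}
	_, err := pipe.Exec(ctx)
	return err
}

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	batch := []pageDoc{{Domain: "example.com", URL: "http://example.com/", Depth: 0}}
	if err := flush(ctx, rdb, batch); err != nil {
		log.Fatal(err)
	}
}
```

The batch size would be read from the config file; once it is reached, the worker calls flush and starts a new batch.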
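A collector sketch with gocolly/colly covering worker sizing, per-domain throttling, robots.txt, and the http-to-https fallback; Colly's default client already follows redirects, and the parallelism heuristic, delay, and depth values are assumptions:

```go
package main

import (
	"log"
	"runtime"
	"time"

	"github.com/gocolly/colly/v2"
)

func main() {
	c := colly.NewCollector(
		colly.Async(true),       // run requests concurrently
		colly.MaxDepth(3),       // crawl depth, from the config file
		colly.IgnoreRobotsTxt(), // requirement: ignore robots.txt
	)

	// Size workers from CPU count by default (a heuristic, overridable
	// via the config file) and throttle per-domain requests so no single
	// host is hammered.
	if err := c.Limit(&colly.LimitRule{
		DomainGlob:  "*",
		Parallelism: runtime.NumCPU() * 2, // assumption: 2 workers per core
		Delay:       200 * time.Millisecond,
	}); err != nil {
		log.Fatal(err)
	}

	// If plain http fails, retry once over https; if that also fails,
	// the domain is simply skipped.
	c.OnError(func(r *colly.Response, err error) {
		u := *r.Request.URL
		if u.Scheme == "http" {
			u.Scheme = "https"
			if err := c.Visit(u.String()); err != nil {
				log.Printf("skipping %s: %v", u.Hostname(), err)
			}
		}
	})

	c.Visit("http://amazon.com")
	c.Wait()
}
```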
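A monitoring sketch serving Prometheus metrics and Go's pprof profiler from one port, so throughput can be graphed and memory leaks chased over long runs; the metric name and port are assumptions:

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* on the default mux

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var pagesCrawled = promauto.NewCounter(prometheus.CounterOpts{
	Name: "crawler_pages_crawled_total",
	Help: "Pages fetched and stored.",
})

func main() {
	http.Handle("/metrics", promhttp.Handler())
	go func() {
		// One listener serves both /metrics and /debug/pprof/*.
		log.Fatal(http.ListenAndServe(":9090", nil))
	}()
	pagesCrawled.Inc() // the crawl loop would call this per stored page
	select {}          // stand-in for the crawler's main loop
}
```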
Other Requirements:
- The HTTP response header must be prepended to the stored page source so it is accessible for downstream processing (see the response sketch after the tag list below).
- Ensure clear documentation on how to modify configurations for easy and rapid adjustments as needed.
- For each page, compute an MD5 hash of the source code after removing spaces (also shown in the response sketch below).
- Retrieve links from the attributes and tags listed below (see the extraction sketch after the tag list).
List of attributes to handle:
- href
- src
- data-src
- data-href
- data-url
- action
List of tags to handle:
- a
- link
- script
- img
- iframe
- frame
- source
- form
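
A response sketch for the header and MD5 requirements above: the status line is prepended to the stored source (Go reports the protocol as e.g. HTTP/2.0) and the hash is computed over the space-stripped source; the 2 MiB cap stands in for the configurable size limit:

```go
package main

import (
	"crypto/md5"
	"fmt"
	"io"
	"net/http"
	"strings"
)

func fetchSource(url string) (source, sum string, err error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", "", err
	}
	defer resp.Body.Close()
	// Enforce the configurable page-size limit while reading.
	body, err := io.ReadAll(io.LimitReader(resp.Body, 2<<20))
	if err != nil {
		return "", "", err
	}
	// Status line first, e.g. "HTTP/2.0 200", then the raw body, matching
	// the expected JSON's "source" field.
	source = fmt.Sprintf("%s %d %s", resp.Proto, resp.StatusCode, body)
	// MD5 over the source with spaces removed, per the spec.
	stripped := strings.ReplaceAll(source, " ", "")
	sum = fmt.Sprintf("%x", md5.Sum([]byte(stripped)))
	return source, sum, nil
}

func main() {
	src, sum, err := fetchSource("http://example.com/")
	if err != nil {
		panic(err)
	}
	fmt.Println(sum, len(src))
}
```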
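An extraction sketch pulling candidate links from exactly the tags and attributes listed above with a single Colly handler:

```go
package main

import (
	"fmt"

	"github.com/gocolly/colly/v2"
)

var attrs = []string{"href", "src", "data-src", "data-href", "data-url", "action"}

func main() {
	c := colly.NewCollector()

	// One handler covers every target tag; each candidate attribute is
	// checked on each matched element.
	c.OnHTML("a, link, script, img, iframe, frame, source, form", func(e *colly.HTMLElement) {
		for _, a := range attrs {
			if v := e.Attr(a); v != "" {
				// Resolve relative URLs against the page before queueing.
				fmt.Println(e.Request.AbsoluteURL(v))
			}
		}
	})

	c.Visit("http://example.com/")
}
```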
Blacklist extensions:
- .mp3
- .avi
- .mp4
- .ogg
- .zip
- .rar
- .7z
- .exe
- .apk
- .iso
- .img
- .jpg
- .png
- .gif
- .bmp
- .ico
- .svg
- .xml
- .woff
- .woff2
- .ttf
- .eot
- .otf
- .csv
- .xls
- .xlsx
- .doc
- .docx
- .ppt
- .pptx
- .odt
- .ods
- .odp
- .odg
- .odf
Expected JSON on Redis per page:

```json
{
  "md5": "3ab3e9f9fcf1a62edfca58f00f1a4a8c",
  "domain": "example.com",
  "host": "subdomain.example.com",
  "url": "http://www.subdomain.example.com/page1",
  "tld": "com",
  "ip": "192.0.2.1",
  "port": 80,
  "depth": 1,
  "rank": 12345,
  "source": "HTTP/2 200 <html><head>...</head><body>...</body></html>",
  "filetype": "html"
}
```