i wonder why everyone is implementing captchas and per-request rate limits these days
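(for context, per-request rate limiting usually amounts to something like a token bucket keyed by client ip: every request burns a token, tokens refill at a fixed rate. rough python sketch, the rates and the per-ip keying are just made-up illustration:)

```python
import time

class TokenBucket:
    """per-client token bucket: each request costs one token,
    tokens refill at a fixed rate up to a burst capacity."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # max burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # refill based on elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# one bucket per client ip (hypothetical keys and limits)
buckets: dict[str, TokenBucket] = {}

def allow_request(client_ip: str) -> bool:
    bucket = buckets.setdefault(client_ip, TokenBucket(rate=5, capacity=10))
    return bucket.allow()
```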
it's not like bots are efficient web scrapers. nothing new there. they're actually slow at issuing web requests; traditional scrapers are much faster.
so the reason must be that some entities are running a lot more traditional web scrapers these days.
is someone selling extremely cheap bandwidth to these entities, so they can saturate a server's entire upstream bandwidth, or what the fuck is the problem?
or is someone running scrapers that hammer servers' dynamic resources with micro-requests just to burn their cpu?
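(that asymmetry is real: a scraper pays almost nothing per request, while the server has to render a template or hit the database on every one. rough python sketch of what i mean, the endpoint and query parameters are made up:)

```python
import concurrent.futures
import urllib.request

# hypothetical target: a dynamic endpoint that does server-side work
# (template rendering, db queries) on every hit, unlike a cached static file
URL = "https://example.com/search?q=term"

def hit(i: int) -> int:
    # each request is tiny for the client: a few hundred bytes of headers
    # and almost no cpu, while the server pays for the full dynamic response
    try:
        with urllib.request.urlopen(f"{URL}{i}", timeout=10) as resp:
            return resp.status
    except Exception:
        return 0

# a handful of threads is enough to keep a modest app server busy
with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
    statuses = list(pool.map(hit, range(1000)))

print(statuses.count(200), "requests succeeded")
```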