Replies: 5 comments 1 reply
-
According to https://anubis.techaro.lol/docs/design/why-proof-of-work, the aim seems to be to put a computational burden on the bad actor?
-
So the threat model is only aggressive scraping by single entities? A botnet of compromised devices whose bots can retain cookies and execute JavaScript (or that can solve Anubis's challenge in native code instead) won't be fazed by having the first request from each node take a second longer.
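To make the "solve it in native code" point concrete, here is a rough sketch of a hashcash-style solver written as a standalone Go program instead of browser JavaScript. The challenge string, nonce encoding, and leading-zero-bits difficulty rule are illustrative assumptions, not Anubis's actual protocol.

```go
// Illustrative sketch of a hashcash-style proof-of-work solver running as
// native code instead of in a browser. The challenge format and the
// "N leading zero bits of SHA-256" rule are assumptions, not Anubis's code.
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"math/bits"
	"time"
)

// leadingZeroBits counts how many leading zero bits a 32-byte hash has.
func leadingZeroBits(sum [32]byte) int {
	n := 0
	for i := 0; i < len(sum); i += 8 {
		word := binary.BigEndian.Uint64(sum[i : i+8])
		z := bits.LeadingZeros64(word)
		n += z
		if z < 64 {
			break
		}
	}
	return n
}

// solve brute-forces a nonce so that SHA-256(challenge || nonce) meets the
// difficulty target. This loop is the work the challenge is meant to impose.
func solve(challenge string, difficulty int) uint64 {
	for nonce := uint64(0); ; nonce++ {
		sum := sha256.Sum256([]byte(fmt.Sprintf("%s:%d", challenge, nonce)))
		if leadingZeroBits(sum) >= difficulty {
			return nonce
		}
	}
}

func main() {
	start := time.Now()
	nonce := solve("example-challenge", 16) // ~2^16 hashes expected on average
	fmt.Printf("solved with nonce %d in %s\n", nonce, time.Since(start))
}
```

On modern hardware a native loop like this finishes a modest difficulty target in a fraction of a second, which is the crux of the botnet objection above.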
-
I agree with you, but there might be something we're missing that makes this system more effective than it looks. Someone with more knowledge should probably chime in, maybe @Xe?
-
Hi,

The intent behind the proof-of-work mechanism is to be an antagonistic jab at the precise signature of the attacks that generative AI companies carry out. It relies on cookies, and the proof-of-work logic was intended to waste CPU time in order to change the economics of mass scraping. I wanted to make something like the Cloudflare Turnstile prompt but self-hostable, and I tried proof of work as a way to prove that the client can run JavaScript by making it do a computation that is expensive to perform but easy to verify on the server.

I'm also getting somewhat skeptical about the benefits of proof of work and am starting to experiment with other options. I'm going to do some research and then publish it in the near future. I've been working on low- or no-JavaScript methods for Anubis (I found a combination of factors that scrapers don't support but even basic browsers do), and will be glad to publish vague info in the near future. I don't have hard info yet, but please trust that I am working on things and that I do care.

Be well,
Xe
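For readers unfamiliar with the asymmetry described here ("expensive to perform but easy to verify"), a hedged sketch of the verification side follows: the server recomputes a single hash, so checking a solution is constant cost while finding one takes roughly 2^difficulty attempts on average. The hex-prefix difficulty rule and function names are illustrative, not Anubis's real code.

```go
// Hedged sketch of the "cheap to verify" half of proof of work: the server
// recomputes one SHA-256 hash and checks the difficulty target. The hex-prefix
// rule here is an assumption for illustration, not Anubis's implementation.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// verify checks a claimed solution with a single hash computation, no matter
// how long the client spent searching for the nonce.
func verify(challenge, nonce string, zeroNibbles int) bool {
	sum := sha256.Sum256([]byte(challenge + ":" + nonce))
	return strings.HasPrefix(hex.EncodeToString(sum[:]), strings.Repeat("0", zeroNibbles))
}

func main() {
	fmt.Println(verify("example-challenge", "12345", 4))
}
```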
-
Thank you very much for the reply; I think it confirms that I understood the limitations correctly.
-
Hello, I run my own L7 DDoS protection using a vaguely similar concept to Anubis, but without PoW: I just check whether the client can execute JavaScript at all.
I am unclear on the benefit of using PoW. Suppose bots can execute JavaScript and retain cookies; then they can solve my challenge once and then flood requests. Isn't the same true for a PoW challenge that is only presented once? I understand that there may be levels to how much JavaScript a bot supports, so that a more elaborate challenge is better, but I have not observed this in the wild at all, and if it were true, exercising lots of different JavaScript features would be the way to go. Proof of work just runs SHA-256 hashing many times, which is not complex in terms of language features, so that's not it.
From my understanding, proof of work slows down the rate of initial requests from bots, but does nothing to limit individual bots afterwards if they can solve it, unless the challenge is presented on every page load.
So what I'm asking is: Why is the amount of computational work done in the challenge of any relevance to protecting against request floods?
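As a rough back-of-the-envelope sketch of this question (all numbers below are assumptions, not measurements of Anubis): if a bot pays the challenge cost once per cookie and then floods, the extra work amortizes to almost nothing per request.

```go
// Back-of-the-envelope amortization of a one-time proof-of-work cost over a
// request flood. All numbers are illustrative assumptions, not measurements.
package main

import "fmt"

func main() {
	solveSeconds := 1.0    // assumed one-time cost to solve the challenge
	floodRate := 100.0     // assumed requests per second after solving
	floodSeconds := 3600.0 // one hour of flooding on the same cookie

	requests := floodRate * floodSeconds
	perRequest := solveSeconds / requests // seconds of extra work per request
	fmt.Printf("amortized cost: %.2f microseconds per request\n", perRequest*1e6)
	// Prints roughly 2.78 microseconds per request, which is why the amount of
	// work in a once-per-cookie challenge barely changes flood economics
	// unless the challenge is re-issued frequently.
}
```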