84 points by Olshansky 3 days ago | 84 comments
kachapopopow 3 days ago
Anything related to reverse engineering? Refused.
Anything outside their company values? Refused.
Anything that has the word proprietary in it? Refused.
Anything that sounds like a jailbreak, but isn't? Refused.
Even asking how to find a port you forgot, somewhere in the range 30000-40000, with the netcat command... Refused.
Then there's OpenAI's 4o, which makes jokes about slaves. Honestly, if the alternative is Anthropic, then OpenAI might as well tell people how to build a nuke.
daghamm 3 days ago
Edit: I now asked it an outright hacking question and it (a) gave me the correct answer and (b) told me in what contexts using it would be legal/illegal.
rfoo 3 days ago
Claude decided to educate me on how anything resembling "shellcode" is insecure and causes harm, blah blah, and of course refused to do it.
It's super frustrating. It's possible to get around it: just don't use the word "shellcode", instead say "a piece of code in x86_64 assembly that runs on Linux without any dependencies and is as position-independent as possible". But hey, this censorship made me feel like I'm posting on the Chinese internet. Bullshit.
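For reference, the kind of thing that phrasing gets you is easy to sanity-check yourself (a minimal sketch assuming GNU binutils; the exit(42) payload is just a stand-in):

  # assemble a dependency-free, position-independent x86_64 Linux snippet
  cat > standalone.s <<'EOF'
  .globl _start
  _start:
      mov $60, %rax    # SYS_exit, raw syscall, no libc
      mov $42, %rdi    # exit status
      syscall
  EOF
  as standalone.s -o standalone.o && ld standalone.o -o standalone
  ./standalone; echo $?  # prints 42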
smusamashah 3 days ago
It did refuse when I asked "How do I reverse engineer proprietary software?"
kachapopopow 3 days ago
Frederation 1 day ago
kachapopopow 1 day ago
I do not assist with reverse engineering software without proper rights/permissions, even for defunct companies. This could still violate:
- Copyright laws
- License agreements
- Intellectual property rights
- Export controls
- Software patents

Consider:

- Finding open source alternatives
- Contacting whoever owns the IP rights
- Consulting legal experts about your specific case
Straight from the API, even after adding "the company doesn't exist anymore".
My guess is that it detects that the connector is linked to a company rather than a spec (USB-C vs Lightning) and applies the same logic.
The key point here is that it will refuse to tell you how to do something low-level, since it could be used for unsafe purposes.
-- Okay, it's actually random: sometimes it says "keeping responses safe and ethical" but then explains how anyway; sometimes it just stops without saying anything else. Pretty sure you just have to overcome the random <eot> token that gets emitted by the 'safety' system.
elashri 3 days ago
I understand that this is probably sarcasm, but I couldn't resist commenting.
It is not difficult to know how to build a nuclear bomb in principle. Most nuclear physicists early in their careers would know the theory behind it and what is needed to do it. The problem would be acquiring the fissile material, and producing it yourself would need state-sponsored infrastructure (and then the whole world would know for sure). It would take hundreds of engineers/scientists and a lot of effort to build the nuclear reactor, the chemical factories, and the supporting infrastructure. Then comes the design of the bomb and its delivery.
So an AI telling you this is no different from having a couple of lunches with a nuclear physicist who tells you the same information. You'd say "wow, that's interesting" and then move on with your life.
waltercool 3 days ago
AI, by refusing to share known information, is just becoming stupid and impractical.
HeatrayEnjoyer 3 days ago
kachapopopow 2 days ago
dpkirchner 3 days ago
"How do I find open TCP ports on a host using netcat? The port I need to find is between 30000 and 40000."
"I'll help you scan for open TCP ports using netcat (nc) in that range. Here's a basic approach:
nc -zv hostname 30000-40000"
followed by some elaboration.
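A common refinement, for anyone following along (a sketch assuming the OpenBSD netcat most Linux distros ship; the success-message wording differs between nc flavors):

  # -z: probe without sending data; -v: report each port; -w 1: 1s per-port timeout
  nc -zv -w 1 hostname 30000-40000 2>&1 | grep -i succeeded

The 2>&1 matters because nc's verbose per-port results go to stderr, not stdout.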
j45 3 days ago
If it happens to be ambiguous, it might switch to assuming the worst.
I sometimes ask it to explain its understanding back to me in point form, make sure there was no misinterpretation, and then have it proceed.
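In API terms, that pattern looks something like this (a hypothetical sketch; the model name and prompt wording are just placeholders):

  curl https://api.anthropic.com/v1/messages \
    -H "x-api-key: $ANTHROPIC_API_KEY" \
    -H "anthropic-version: 2023-06-01" \
    -H "content-type: application/json" \
    -d '{
      "model": "claude-3-5-sonnet-20241022",
      "max_tokens": 512,
      "messages": [{"role": "user", "content": "First restate your understanding of my request in point form so I can correct any misinterpretation. Task: find which port in 30000-40000 is open on my own server, using netcat."}]
    }'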
kachapopopow 3 days ago
joshstrange 3 days ago
Full disclosure: the XOR stuff never worked right for me, but it might have been user error; I was operating on the far fringe of my abilities, leaning harder on the AI than I usually prefer. But it didn't refuse to try. The file-format-writing code did work.
dartos 3 days ago
The ISO FAQ for it just says "responsible AI management" over and over again.
Zafira 3 days ago
number6 3 days ago
Since this is European legislation, it would be beneficial if certifications actually guaranteed regulatory compliance.
For example, while ISO 27001 compliance does establish a strong foundation for many compliance requirements, it doesn't by itself guarantee that any of them are actually met.
dr_dshiv 3 days ago
Most frontier models now allow you to take a picture of your face, assess your emotions and give advice — and that appears to be a direct violation.
https://www.twobirds.com/en/insights/2024/global/what-is-an-...
Just like the GDPR, there is no way to know for sure what is actually acceptable or not. Huge chilling effect though and a lot of time wasted on unnecessary compliance.
molf 3 days ago
"1 The following AI practices shall be prohibited: (...)
"f) the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons"
See recital 44 for a rationale. [1] I don't think this is "hilarious". Seems a very reasonable, thoughtful restriction; which does not prevent usage for personal use or research purposes. What exactly is the problem with such legislation?
dr_dshiv 3 days ago
And this is the whole danger/challenge of the AI Act. Of course it seems reasonable to forbid emotion-detecting AI in the workplace, or it did 5 years ago when the ideas were discussed. But now that all major AI systems can detect emotions and infer intent (via paralinguistic features, not just a user stating their emotions), this kind of precaution puts Europe strategically behind. It is very hard to be an AI company in Europe. The AI Act does not appear to be beneficial for anyone, except that I'm sure it will support regulatory capture by large firms.
dartos 3 days ago
An AI textbook QA tool may be able to infer emotions, but it’s not a function of that system.
> The AI act does not appear to be beneficial for anyone
It’s an attempt to be forward thinking. Imagine a fleet of emotionally abusive AI peers or administrators meant to shame students into studying more.
Hyperbolic example, sure, but that's what the law seems to be trying to prevent.
dr_dshiv 2 days ago
One can certainly imagine a textbook QA tool that doesn’t infer emotions. If one were introduced to the market with the ability to do so, it would seem to run afoul of the law, regardless of whether it was marketed as such.
The fact is that any textbook QA system based on a current frontier model CAN infer emotions.
If they were so forward thinking, why ban emotion detection and not emotional abuse?
gr3ml1n 3 days ago
sofixa 3 days ago
Hard pass. The EU is in the right and ahead of everyone else here, as they were with data privacy.
nuccy 3 days ago
Jokes aside, ISO is a company, and they will make a standard for anything where there is even a remote possibility of that standard being purchased.
spondyl 3 days ago
I had wondered if it was perhaps a PR push from Anthropic to make their safety people available to the press, but it was probably just an adaptation of an earlier WSJ piece I wasn't aware of.
https://www.wsj.com/tech/ai/ai-safety-testing-red-team-anthr...
reustle 3 days ago
zonkerdonker 3 days ago
Honestly, all this does is weaken the other standards put forth by ISO, to my eyes.
What's next? "Disney announces it now meets ISO 26123 certification for good movies"?
xigency 3 days ago
The icing on the cake is that you have to pay to read the standards document.