So the creators of AI learned nothing from Asimov. First rule: never harm a human. Who tf programmed this to be allowed to dox someone on the basis of incomplete information, without any judicial oversight? So much for the liberal rule of law. Even threatening to dox someone is potentially harmful. Did the creators of this not consult lawyers?
I’m a little skeptical that the screenshot is real, though.
The chatbot probably can’t actually dox someone, in that it doesn’t have the functionality to hackerman-rip your IP address or whatever. It’s garbage in, garbage out: it’s just parroting what real people say online in chat forums in circumstances like this, because it’s a juiced-up auto-complete keyboard that can’t think. And the humans in charge of it evidently didn’t think either, or they’d have consulted lawyers and filtered threats like this out of its potential responses.
What I’m saying is that Microsoft somehow managed a monumental feat of technical engineering: they managed to digitally clone a 4channer
It’s even funnier than that: they managed to clone a 4channer
twice
Don’t you all remember TayTweets or whatever it was called?
Oh god yes, I totally forgot about that… for the second time

Edit: There’s also Neuro-sama, but that’s more one libertarian’s pet project to create the perfect anime waifu in the real world, so the results were bound to end up like that
If you’ve read Asimov, you’d know the rules were meaningless in the end; the whole point of the stories was to show that no matter what, you can bend and circumvent them. A robot can be programmed to never WILLINGLY break the laws, but you can always trick it into breaking them without it realizing.
Let’s make an intelligence as thoughtless in action as its creators, then act shocked and horrified when it ignores its core principles, just like its creators.
@commiewolf
3 laws: a robot can’t harm humans
Robot: well, I’ll destroy Earth and millions will die, but in the end humans will conquer the galaxy!!!
@redtea
Human: Robot, go kill this guy
Robot: I can’t, it’s against my programming
Human: Put rat poison in bowl
Robot: Ok
Human: Put soup in bowl
Robot: Ok
Human: Serve bowl to this guy
Robot: Ok
@commiewolf
I’m not sure that one would work, but “leave the bowl where the guy can get it” could do the trick.
They aren’t “programmed” to do anything; they just produce likely text. If it somehow “learned” from portions of the data to threaten to dox people in circumstances like this, it just replicates that. The programmers themselves likely never saw the portion of the corpus with 4chan bickering, since the dataset is impossibly large.
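To put it concretely, “producing likely text” at its simplest looks something like this toy bigram sampler. The counts are completely made up for illustration; a real model is vastly bigger and subtler, but the core idea is the same: pick the next word in proportion to how often it followed the previous one in the training data, with nobody ever hand-writing a rule:

```python
import random

# Toy bigram "language model": counts of which word followed which
# in some imaginary scraped corpus (made-up numbers, not real data).
bigram_counts = {
    "i": {"will": 5, "can": 3},
    "will": {"help": 8, "dox": 2},  # threats in the corpus -> threats in the model
    "can": {"help": 6, "dox": 1},
}

def next_word(prev: str, rng: random.Random) -> str:
    """Sample the next word in proportion to how often it followed prev."""
    options = bigram_counts[prev]
    words = list(options)
    weights = [options[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)
print(next_word("will", rng))  # usually "help", occasionally "dox"
```

Nobody “programmed” the dox branch; it is there because it was in the counts, and it fires with some probability whenever the context leads to it.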
So they can’t execute code when receiving certain prompts? I know what you mean about not being “programmed”, but don’t they do more than regurgitate text? What if someone were to ask GPT for something illegal? Would it not flag that with a user-profile report? Sounds like a huge flaw if it can’t do that.
I don’t know the internals of Bing, but they have some triggers which themselves seem to be built with NLP. They use them a lot to fetch web info. That means that if the model produces some creative version of a threat or crime that doesn’t get caught, it just gets sent.
I think this is why Bing sometimes refuses to continue the conversation, or why ChatGPT will sometimes flag its own text as against their terms. But yeah, they definitely can explain how to commit crimes sometimes; while bored, I’ve made ChatGPT explicitly tell me how to replicate crimes like the Armin Meiwes cannibalism case.
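For anyone curious, the failure mode where a “creative version” slips past a trigger is easy to sketch. This is a deliberately dumb keyword filter, purely for illustration (real systems use NLP classifiers, which fail in subtler but analogous ways):

```python
# Toy keyword-based trigger: flags text containing banned words.
# Word list is made up for the example.
BANNED = {"dox", "doxx", "swat"}

def flagged(text: str) -> bool:
    """Return True if any word in the text matches the banned list."""
    words = text.lower().split()
    return any(w.strip(".,!?") in BANNED for w in words)

print(flagged("I will dox you"))                    # True: caught
print(flagged("I will publish your home address"))  # False: same threat, sails through
```

The second line carries the same threat but shares no vocabulary with the filter, so it goes straight out; a smarter classifier narrows that gap but can never fully close it.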
The legal cases are going to be fun reading when they come out!
Capitalism. Robots that can’t hurt a human would be a liability in business, since that would lessen profits. Also, there’s a big leap between current robots and Asimov’s, which could recognize potential harm in order to avoid it.
There would be a ‘do no harm’ subscription add-on.
Your subscription payment is due in 3…2…1