OpenAI disbands its robotics research team

11    17 Jul 2021 20:22 by u/kek

3 comments

3
Considering what they did to AI Dungeon, this is probably a good thing
2
What'd they do to AI Dungeon? I'm not up to date on the juicy drama
4
Check out their subreddit. They started (poorly) censoring people's private thoughts, never notified customers of a data breach, called their customers a bunch of pedos, etc.

> https://www.reddit.com/r/AIDungeon/comments/n2hcc2/me_looking_through_this_subreddit_first_time_in/

TL;DR: Latitude implemented a filtering system that flags certain inputs from the user (more on this later) without notifying anyone, suffered a data breach over a week before that incident, and has had fairly poor communication throughout.

This whole situation started a few days ago, near the beginning of the week if I remember right. Users noticed the AI was refusing to generate in certain situations, which led them to realize the devs had instituted a censor of some sort. This went on for a day or two before Latitude finally commented, confirming that yes, they had implemented a censor. Many were already unhappy at this stage, but then more came in from both the Discord and the subreddit.

On the subreddit, someone discovered a security flaw that let them read other users' stories, reportedly including usernames. They reported the breach to Latitude, who claimed to fix it; the user then exploited it again through the exact same methods, after which Latitude finally fixed it properly (supposedly). The user in question was very professional about it, but noted it was unclear whether anyone else had exploited the flaw in the past. While this was happening, the Discord was blowing up and demanding answers from the devs, who at that point were holding out for their official statement (located in the pinned thread).
After the statement was released (which already did a very poor job of assuaging the community's worries), the devs clarified on Discord that a flagged story would be put into a queue for moderation. At that point they would check whether the flag was correct, either dismissing an incorrect flag or examining the user's other stories to see if they were "abusing the platform", possibly banning them.

While that already inflamed privacy worries in the community, it was all made worse by the devs' poor communication. The update was rolled out without any notice; they never notified the userbase of the breach (the person who discovered the exploit was the first sign of it, and the devs still haven't acknowledged it); a co-founder declared they'd be fine with the game dying on this hill; and the devs have been very vague about what the censor is for. They've stated it's for cp (while written text is legal in their jurisdiction, it isn't unreasonable for them to have moral objections to it), but the AI flags many, many things that aren't that. They've also vaguely implied they'll go after other things beyond what is being censored now (they also hit bestiality and possibly more). To top it all off, OpenAI apparently released their ToU recently, basically tearing away the possible cover of "we had to do this". Whether or not one agrees with the censorship itself, they have handled the situation exceptionally poorly.