

OpenAI introduces Jukebox, a new AI model that generates genre-specific music with lyrics



Artificial intelligence research laboratory OpenAI today debuted a new generative model called Jukebox that is capable of making music. It's technologically impressive, even if the results sound like mushy versions of songs that may feel familiar. According to the post on OpenAI's blog, the researchers chose to work on music because it's hard. And even if they're not exactly what I'd call music, the results the researchers obtained were impressive; there are recognizable chords and melodies and words (sometimes).

The way OpenAI did it was also interesting. They used raw audio to train the model (which spits out raw audio in return) instead of using "symbolic music," like player pianos use, because symbolic music doesn't include voices. To get their results, the researchers first used convolutional neural networks to encode and compress raw audio, and then used what they call a transformer to generate new compressed audio that was then upsampled to turn it back into raw audio. Have a chart!
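That compress-then-generate-then-upsample pipeline can be sketched in miniature. To be clear, this is a toy illustration, not OpenAI's code: the codebook, the quantizing `encode`, the random-sampling `generate`, and the hold-based `decode` below are all hypothetical stand-ins for the learned VQ-VAE encoder, transformer prior, and upsampling decoder the blog post describes.

```python
import random

# Toy sketch of Jukebox's pipeline: raw audio -> discrete codes ->
# generate new codes -> upsample back to raw audio. All names and
# values are hypothetical stand-ins, not the real learned components.

CODEBOOK = [-1.0, -0.5, 0.0, 0.5, 1.0]  # stand-in for a learned codebook

def encode(raw_audio, hop=4):
    """Compress raw samples into discrete codes (stand-in for the CNN encoder)."""
    codes = []
    for i in range(0, len(raw_audio), hop):
        window = raw_audio[i:i + hop]
        mean = sum(window) / len(window)
        # quantize each window to the nearest codebook entry
        codes.append(min(range(len(CODEBOOK)), key=lambda k: abs(CODEBOOK[k] - mean)))
    return codes

def generate(codes, length, rng):
    """Extend a code sequence (stand-in for the autoregressive transformer)."""
    out = list(codes)
    for _ in range(length):
        out.append(rng.choice(range(len(CODEBOOK))))  # real model samples from a prior
    return out

def decode(codes, hop=4):
    """Upsample codes back into raw audio (stand-in for the decoder)."""
    raw = []
    for c in codes:
        raw.extend([CODEBOOK[c]] * hop)  # naive sample-and-hold upsampling
    return raw

audio = [0.1, 0.2, 0.1, 0.0, -0.4, -0.6, -0.5, -0.4]
codes = encode(audio)                                  # 8 samples -> 2 codes
new_codes = generate(codes, length=2, rng=random.Random(0))
output = decode(new_codes)                             # 4 codes -> 16 samples
```

The key idea the sketch preserves is that the expensive generative modeling happens in the small, compressed code space, and raw audio is only reconstructed at the end.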



The approach is similar to how OpenAI developed a previous music-making AI called MuseNet, but Jukebox goes a step further by generating its own lyrics in collaboration (the company used the word "co-written") with OpenAI researchers. Unlike MuseNet, which used MIDI files, these models were trained on a raw dataset of 1.2 million songs (600,000 in English) and used metadata and lyrics scraped from LyricWiki. (Artist and genre data were included to improve the model's output.) Even so, as the researchers write, there are limitations.

"While Jukebox represents a step forward in musical quality, coherence, length of audio sample, and ability to condition on artist, genre, and lyrics, there is a significant gap between these generations and human-created music," they write. "For example, while the generated songs show local musical coherence, follow traditional chord patterns, and can even feature impressive solos, we do not hear familiar larger musical structures such as choruses that repeat."

There are also other problems with the experiment. As the writer and podcaster Cherie Hu pointed out on Twitter, Jukebox is potentially a copyright catastrophe. (It's worth noting that just this week, Jay-Z tried to use copyright strikes to take down synthesized audio of himself from YouTube.)

All of that said, Jukebox is a pretty fascinating achievement that pushes the boundaries of what's possible, even if the musicians OpenAI showed Jukebox to thought it needed some work. Go listen for yourself!


Read Twitter’s update on the huge hack — 8 accounts may have had private messages stolen




On Friday evening, Twitter issued its first full blog post about what happened after the biggest security lapse in the company’s history, one that led to attackers getting hold of some of the highest-profile Twitter accounts in the world — including Democratic presidential candidate Joe Biden, former President Barack Obama, Tesla CEO Elon Musk, Microsoft co-founder Bill Gates, Kanye West, Michael Bloomberg, and more.

The bad news: Twitter has now revealed that the attackers may indeed have downloaded the private direct messages (DMs) of up to 8 individuals while conducting their Bitcoin scam, and were able to see “personal information” including phone numbers and email addresses for every account they targeted.

That’s because Twitter has confirmed that attackers attempted to download the entire “Your Twitter Data” archive for those 8 individuals, which contains DMs among other info.

They may even have DMs that the 8 individuals tried to delete, given that Twitter stores DMs on its servers as long as either party to a conversation keeps them around — we learned last February that you can retrieve deleted DMs by downloading the “Your Twitter Data” archive, even if you’ve deleted them yourself. The archive can also include other personal information like your address book and any images and videos you may have attached to those private messages as well.
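That retention rule — a DM persists on Twitter's servers as long as either participant still has a copy — can be modeled in a few lines. This is a hypothetical sketch of the behavior the article describes, not Twitter's actual storage logic; the class and method names are invented for illustration.

```python
class DirectMessage:
    """Toy model of the retention rule the article describes: a DM
    survives on the server until every participant has deleted it."""

    def __init__(self, sender, recipient, text):
        self.participants = {sender, recipient}
        self.deleted_by = set()
        self.text = text

    def delete_for(self, user):
        """Delete this user's copy (the other side still sees it)."""
        self.deleted_by.add(user)

    def still_on_server(self):
        # purged only once all participants have deleted their copies
        return self.deleted_by != self.participants

    def appears_in_archive(self, user):
        # the "Your Twitter Data" export includes any DM still on the
        # server, even ones this user deleted on their own side
        return self.still_on_server()
```

Under this model, a message you deleted still shows up in your own archive as long as the other party kept it, which is exactly why an attacker downloading the archive could recover "deleted" conversations.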

The good news: Twitter claims none of those 8 accounts were verified users, suggesting that none of the highest-profile individuals targeted had their data downloaded. It’s still possible that the hackers looked at their DMs, but no, Democratic presidential candidate Joe Biden and others probably didn’t just get their DMs stolen outright.

According to Twitter, hackers targeted 130 accounts; successfully triggered a password reset, logged in, and tweeted from 45 of them; and attempted to download data for only those "up to eight" non-verified accounts. We do not know how many accounts they may have scanned for personal information or how many DMs they might have simply accessed or read.

And for the larger batch of 130 accounts — including high-profile ones like the Democratic presidential candidate — Twitter says the attackers may have been able to see other sorts of personal information. Twitter also allows logged-in users to see a location history of the places and times they’ve logged in, as an example.

Twitter previously confirmed that its own internal employee tools were used to facilitate the account takeovers, and suspected that its employees had fallen for a social engineering scam — now, the company is going further to say definitively that the attackers “successfully manipulated a small number of employees and used their credentials to access Twitter’s internal systems, including getting through our two-factor protections.”

That aligns with the prevailing theories, which you can read more about in the NYT’s impressive report here.

There are still many, many more questions and serious investigations still ahead.

You can read Twitter’s full blog post here.



Wikimedia is writing new policies to fight Wikipedia harassment




Wikipedia plans to crack down on harassment and other “toxic” behavior with a new code of conduct. The Wikimedia Foundation Board of Trustees, which oversees Wikipedia among other projects, voted on Friday to adopt a more formal moderation process. The foundation will draft the details of that process by the end of 2020, and until then, it’s tasked with enforcing stopgap anti-harassment policies.

“Harassment, toxic behavior, and incivility in the Wikimedia movement are contrary to our shared values and detrimental to our vision and mission,” said the board in a statement. “The board does not believe we have made enough progress toward creating welcoming, inclusive, harassment-free spaces in which people can contribute productively and debate constructively.”

The trustee board gave the Wikimedia Foundation four specific directives. It’s supposed to draft a “binding minimum set of standards” for behavior on its platforms, shaped by input from the community. It needs to “ban, sanction, or otherwise limit the access” of people who break that code, as well as create a review process that involves the community. And it must “significantly increase support for and collaboration with community functionaries” during moderation. Beyond those directives, the Wikimedia Foundation is also supposed to put more resources into its Trust and Safety team, including more staff and better training tools.

The trustee board says its goal is “developing sustainable practices and tools that eliminate harassment, toxicity, and incivility, promote inclusivity, cultivate respectful discourse, reduce harms to participants, protect the projects from disinformation and bad actors, and promote trust in our projects.”

Wikipedia’s volunteer community can be highly dedicated but intensely combative, launching edit wars over controversial topics and harshly enforcing editorial standards in a way that may drive away new users. The Wikimedia Foundation listed harassment as one factor behind its relative lack of female and gender-nonconforming editors, who have complained of being singled out for abuse. At the same time, the project grew out of a freewheeling community-focused ethos — and many users object to the kind of top-down enforcement you’d find on a commercial web platform.

These problems came to a head last year, when the Wikimedia Foundation suspended a respected but abrasive editor who other users accused of relentless harassment. The intervention bypassed Wikipedia’s normal community arbitration process, and several administrators resigned during the backlash that followed.

The board of trustees doesn’t mention that controversy, saying only that the vote “formalizes years of longstanding efforts by individual volunteers, Wikimedia affiliates, Foundation staff, and others to stop harassment and promote inclusivity on Wikimedia projects.” But on a discussion page, one editor cited the suspension to argue that the Wikimedia Foundation shouldn’t interfere with Wikipedia’s community moderation — while others said a formal code of conduct would have reduced the widespread confusion and hostility around it.

Amid all this, Wikipedia has become one of the internet’s most widely trusted platforms. YouTube, for instance, uses Wikipedia pages to rebut conspiracy videos. That’s raised the stakes and created a huge incentive for disinformation artists to target the site. Friday’s vote suggests the Wikimedia Foundation will take a more active role in moderating the platform, even if we don’t know exactly how.



Twitter’s new reply-limiting feature is already changing how we talk on the platform




Twitter is testing a new feature that lets users decide who can reply to their tweets, the company announced on Wednesday, and some accounts are already using it in some interesting new ways.

Previously, anybody could reply to anybody on Twitter (as long as the account wasn’t protected and hadn’t blocked them). But now, if you’re part of the test, you can decide whether to allow replies from everyone, only people you follow, or only people you tag — which, if you don’t tag anyone, means that no one can reply at all. Deciding who can reply on a tweet-by-tweet basis could change how some people use the social media platform in significant ways.
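The three settings described above amount to a simple permission check. The sketch below is a hypothetical reading of that logic, not Twitter's implementation: the function name, the setting strings, and the assumption that tagged users can reply under the "people you follow" setting are all invented for illustration.

```python
def can_reply(setting, author, replier, following, tagged):
    """Decide whether `replier` may reply to `author`'s tweet.

    `setting`   -- "everyone", "following", or "mentioned" (assumed labels)
    `following` -- set of accounts the author follows
    `tagged`    -- set of accounts @-mentioned in the tweet
    """
    if replier == author:
        return True  # an author can always reply in their own thread
    if setting == "everyone":
        return True
    if setting == "following":
        # assumption: tagged users can also reply under this setting
        return replier in following or replier in tagged
    if setting == "mentioned":
        return replier in tagged  # tag no one, and no one can reply
    raise ValueError(f"unknown setting: {setting}")
```

Under the "mentioned" setting with an empty tag set, every check fails, which is exactly the trick Meet the Press and Naughty Dog used to shut replies off entirely.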

Interviews on Twitter, for example, could be much more streamlined, and NBC’s Twitter account for Meet the Press has already shown an example of how. Meet the Press announced an interview with NBC News’ Andrea Mitchell and only allowed people it tagged in the tweet to reply — which, in this case, was only Mitchell. What followed almost felt like a long tweetstorm, split between two accounts.

Part of Meet the Press’ Twitter interview with Andrea Mitchell.

Limiting how users can interact with live Twitter interviews does mean that emergent conversations won’t occur as easily in the replies — you can theoretically still quote tweet messages even if those tweets have replies limited, and conversations could be started that way. Still, the limitation means interviews may not feel quite as organic as they sometimes were before.

On the plus side, the feature does make interviews much easier to follow, which would have been handy for, say, the messy #KaraJack interview between Twitter CEO Jack Dorsey and Recode’s Kara Swisher back in February 2019. Dorsey had some fun referencing that mess by not allowing replies to this tweet:

Limiting replies could also be used to help prevent the spread of spoilers for upcoming movies, TV shows, and video games. On Thursday, for example, Naughty Dog posted screenshots of its upcoming PS4 title The Last of Us Part II, and limited replies to people it tagged — which was no one.

The Last of Us Part II, which launches on June 19th, promises to have a deeply engaging story, and the studio is doing everything it can on social media to keep that story under wraps until the game launches, hence the move to disable replies. Naughty Dog is also trying to stop people from sharing spoilers from major leaks of the game that hit the web in late April; Sony and Naughty Dog disabled YouTube comments on the latest trailer, too.

There is the potential that limiting replies could be used more nefariously. If politicians or public officials post misinformation and don’t allow replies, people wouldn’t be able to easily fact-check the tweet in the replies that would appear under the original misinformation, where a correction could do the most good in setting the record straight. And interestingly, the ACLU is arguing that public officials blocking replies would violate the First Amendment — President Donald Trump has yet to make use of the feature, but it will surely inspire debate if and when he does.

In the case of misinformation, if the original account isn’t allowing replies to a tweet, users can still use a quote tweet to comment. It’s not an ideal solution — a quote tweet only appears on your feed, so a fact-check, for example, likely won’t be seen by everyone who saw the original tweet — but it’s still a way to weigh in if you aren’t able to directly reply.

Here’s an example of how that could look. However, bear in mind that in this instance, the quoted tweet did allow replies because it was posted before Twitter implemented reply blocking.

That all said, not allowing replies can have more lighthearted uses. I’ll admit I laughed at Dorsey’s tweet that I included earlier, and Lil Nas X continued to prove he is a Twitter all-star with this great prank:

There are bound to be new ideas that emerge as more Twitter users get access to reply blocking, and I’m interested to see how people use the feature in creative ways. But I’m nervous to find out what diabolical things fast food brands will say when they can limit replies only to each other.
