Month: January 2024
NYT Connections today: See hints and answers for January 31
Connections is a New York Times word game that’s all about finding the “common threads between words.” Here’s how to solve today’s puzzle.
Connections is the latest New York Times word game that’s captured the public’s attention. The game is all about finding the “common threads between words.” And just like Wordle, Connections resets after midnight and each new set of words gets trickier and trickier—so we’ve served up some hints and tips to get you over the hurdle.
If you just want to be told today’s puzzle, you can jump to the end of this article for January 31’s Connections solution. But if you’d rather solve it yourself, keep reading for some clues, tips, and strategies to assist you.
What is Connections?
The NYT’s latest daily word game has become a social media hit. The Times credits associate puzzle editor Wyna Liu with helping to create the new word game and bringing it to the publication’s Games section. Connections can be played in web browsers and on mobile devices, and it requires players to group four words that share something in common.
Each puzzle features 16 words, and each grouping of words is split into four categories. These sets could comprise anything from book titles and software to country names. Even though multiple words will seem like they fit together, there’s only one correct answer. If a player gets all four words in a set correct, those words are removed from the board. Guess wrong and it counts as a mistake; players get up to four mistakes before the game ends.
Players can also rearrange and shuffle the board to make spotting connections easier. Additionally, each group is color-coded with yellow being the easiest, followed by green, blue, and purple. Like Wordle, you can share the results with your friends on social media.
Here’s a hint for today’s Connections categories
Want a hint about the categories without being told the categories? Then give these a try:
Yellow: Being a bit too happy in my opinion
Green: Wedding planning
Blue: Poetic alliteration
Purple: Types of pits
Here are today’s Connections categories
Need a little extra help? Today’s connections fall into the following categories:
Yellow: Merriment
Green: Booked for a Wedding
Blue: Rhymes
Purple: ___ Pit
Looking for Wordle today? Here’s the answer to today’s Wordle.
Ready for the answers? This is your last chance to turn back and solve today’s puzzle before we reveal the solutions.
Drumroll, please!
The solution to Connections #234 is…
What is the answer to Connections today?
Merriment: CHEER, GLEE, FESTIVITY, MIRTH
Booked for a Wedding: BAND, CATERER, FLORIST, OFFICIANT
Rhymes: CHOIR, FIRE, LIAR, FRYER
___ Pit: BARBECUE, ORCHESTRA, SNAKE, TEA
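For the curious, the mechanics described earlier (16 words, four hidden groups of four, and up to four mistakes) can be modeled in a few lines of Python. The sketch below is purely illustrative and is not the NYT’s implementation; the function names are invented, and today’s answer groups serve as sample data.

```python
# A rough sketch of the Connections rules described above, assuming a simple
# command-line model -- illustrative only, not the NYT's actual code.
# Today's answer groups are used as sample data; all names here are made up.

PUZZLE = {
    "Merriment": {"CHEER", "GLEE", "FESTIVITY", "MIRTH"},
    "Booked for a Wedding": {"BAND", "CATERER", "FLORIST", "OFFICIANT"},
    "Rhymes": {"CHOIR", "FIRE", "LIAR", "FRYER"},
    "___ Pit": {"BARBECUE", "ORCHESTRA", "SNAKE", "TEA"},
}
MAX_MISTAKES = 4


def check_guess(guess: set[str], remaining: dict[str, set[str]]) -> str | None:
    """Return the matching category if the four guessed words form a group, else None."""
    for category, words in remaining.items():
        if guess == words:
            return category
    return None


def play(guesses: list[set[str]]) -> None:
    remaining = dict(PUZZLE)
    mistakes = 0
    for guess in guesses:
        category = check_guess(guess, remaining)
        if category is not None:
            print(f"Solved: {category}")
            del remaining[category]  # correct groups are removed from the board
        else:
            mistakes += 1  # a wrong guess counts toward the four-mistake limit
            print(f"Mistake {mistakes}/{MAX_MISTAKES}")
            if mistakes == MAX_MISTAKES:
                print("Game over")
                return
    if not remaining:
        print("Puzzle complete!")


# Example: one mixed (wrong) guess, then the Merriment group.
play([{"CHEER", "FIRE", "TEA", "BAND"}, {"CHEER", "GLEE", "FESTIVITY", "MIRTH"}])
```

Running the example logs one mistake for the mixed guess, then clears the Merriment group from the board.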
Don’t feel down if you didn’t manage to guess it this time. There will be new Connections for you to stretch your brain with tomorrow, and we’ll be back again to guide you with more helpful hints.
Is this not the Connections game you were looking for? Here are the hints and answers to yesterday’s Connections.
UPS to lay off 12,000 employees as it turns to AI for efficiency
Parcel delivery giant UPS announced on Tuesday that it plans to cut 12,000 jobs, representing approximately 2.5% of its global workforce. The company said the layoff is due to economic challenges and labor disputes that drove away some customers. UPS […]
The post UPS to lay off 12,000 employees as it turns to AI for efficiency first appeared on TechStartups.
Microsoft AI Engineer Says Company Thwarted Attempt To Expose DALL-E 3 Safety Problems
Todd Bishop reports via GeekWire: A Microsoft AI engineering leader says he discovered vulnerabilities in OpenAI’s DALL-E 3 image generator in early December allowing users to bypass safety guardrails to create violent and explicit images, and that the company impeded his previous attempt to bring public attention to the issue. The emergence of explicit deepfake images of Taylor Swift last week “is an example of the type of abuse I was concerned about and the reason why I urged OpenAI to remove DALL-E 3 from public use and reported my concerns to Microsoft,” writes Shane Jones, a Microsoft principal software engineering lead, in a letter Tuesday to Washington state’s attorney general and Congressional representatives.
404 Media reported last week that the fake explicit images of Swift originated in a “specific Telegram group dedicated to abusive images of women,” noting that at least one of the AI tools commonly used by the group is Microsoft Designer, which is based in part on technology from OpenAI’s DALL-E 3. “The vulnerabilities in DALL-E 3, and products like Microsoft Designer that use DALL-E 3, makes it easier for people to abuse AI in generating harmful images,” Jones writes in the letter to U.S. Sens. Patty Murray and Maria Cantwell, Rep. Adam Smith, and Attorney General Bob Ferguson, which was obtained by GeekWire. He adds, “Microsoft was aware of these vulnerabilities and the potential for abuse.”
Jones writes that he discovered the vulnerability independently in early December. He reported the vulnerability to Microsoft, according to the letter, and was instructed to report the issue to OpenAI, the Redmond company’s close partner, whose technology powers products including Microsoft Designer. He writes that he did report it to OpenAI. “As I continued to research the risks associated with this specific vulnerability, I became aware of the capacity DALL-E 3 has to generate violent and disturbing harmful images,” he writes. “Based on my understanding of how the model was trained, and the security vulnerabilities I discovered, I reached the conclusion that DALL-E 3 posed a public safety risk and should be removed from public use until OpenAI could address the risks associated with this model.”
On Dec. 14, he writes, he posted publicly on LinkedIn urging OpenAI’s non-profit board to withdraw DALL-E 3 from the market. He informed his Microsoft leadership team of the post, according to the letter, and was quickly contacted by his manager, saying that Microsoft’s legal department was demanding that he delete the post immediately, and would follow up with an explanation or justification. He agreed to delete the post on that basis but never heard from Microsoft legal, he writes. “Over the following month, I repeatedly requested an explanation for why I was told to delete my letter,” he writes. “I also offered to share information that could assist with fixing the specific vulnerability I had discovered and provide ideas for making AI image generation technology safer. Microsoft’s legal department has still not responded or communicated directly with me.” “Artificial intelligence is advancing at an unprecedented pace. I understand it will take time for legislation to be enacted to ensure AI public safety,” he adds. “At the same time, we need to hold companies accountable for the safety of their products and their responsibility to disclose known risks to the public. Concerned employees, like myself, should not be intimidated into staying silent.” The full text of Jones’ letter can be read here (PDF).
Read more of this story at Slashdot.
YouTube TV Now Lets You Customize Your Multiview Experience
Google has confirmed to Cord Cutters News that you can now customize which games you watch in your multiview window. The keyword here is “games” because this feature is still limited to sporting events at this time. From the report: One of YouTube TV’s best features is the option to watch up to four sporting or news events at once on the same screen. The only downside has been that customers have been unable to pick which games appear in these windows; instead, YouTube TV gives you a number of premade multiview options to choose from. Now, though, YouTube TV seems to be testing the ability to pick which games you want on your TV.
Yesterday, YouTube TV started giving some NBA League Pass subscribers the ability to pick which games they want to watch from a short list. From there, YouTube TV creates a multiview channel with the games you pick. Google says the feature is coming to all devices that support multiview, though for now you can only build these custom channels from a list of preselected NBA games rather than from any channel you want.
Read more of this story at Slashdot.