https://keepthefuturehuman.ai
time.com/7308857/chin...
papers.ssrn.com/sol3/papers....
NY’s RAISE Act, which would require the largest AI developers to have a safety plan, just passed the legislature.
Call Governor Hochul at 1-518-474-8390 to tell her to sign the RAISE Act into law.
Less than a year ago, Sam Altman said he wanted to see powerful AI regulated by an international agency to ensure "reasonable safety testing"
But now he says "maybe the companies themselves put together the right framework"
Sam Altman says "I would really point to our track record"
The track record: the Superalignment team disbanded, and the FT reported last week that OpenAI is cutting safety testing time from months to just *days*.
Chinese AI runs on American tech that we freely give them! That's not "Art of the Deal"!
OpenAI used to give staff months to safety test. Now it's just days, per great reporting from Cristina Criddle at the FT. 🧵
Safety testers have only been given days to conduct evaluations.
One of the people testing o3 said "We had more thorough safety testing when [the technology] was less important"
We've made it super easy to contact your senator:
— It takes just 60 seconds to fill out our form
— Your message goes directly to both of your senators
controlai.com/take-a...
The brief was filed by Harvard Law Professor Lawrence Lessig, who also reps OpenAI whistleblowers.
Here are the highlights 🧵
Elon Musk: "20% likely, maybe 10%"
Ted Cruz: "On what time frame?"
Elon Musk: "5 to 10 years"
That's why FLI Executive Director Anthony Aguirre has published a new essay, "Keep The Future Human".
🧵 1/4
SB 53 does two things: 🧵
✍️ Apply by January 10 at the link in the replies:
Our proposal is to implement a Conditional AI Safety Treaty. Read the details below.
www.theguardian.com/technology/2...
New on the FLI blog:
-Why might AIs resist shutdown?
-Why is this a problem?
-What other instrumental goals could AIs have?
-Could this cause a catastrophe?
🔗 Read it below:
Our keynote speakers:
• Yoshua Bengio
• Dawn Song
• Iason Gabriel
Submit abstract by 15 February:
🧵
After a machine learning librarian released and then deleted a dataset of one million Bluesky posts, several larger datasets have appeared in its place, including one of almost 300 million posts.
🔗 www.404media.co/bluesky-post...
Rosie Campbell says she has been “unsettled by some of the shifts over the last ~year, and the loss of so many people who shaped our culture”.
She says she “can’t see a place” for her to continue her work internally.