That Tech Guy
@that-tech-guy.mastodon.social.ap.brid.gy
Technology geek... not an expert at everything, but definitely an enthusiast.

[bridged from https://mastodon.social/@that_tech_guy on the fediverse by https://fed.brid.gy/ ]
arstechnica.com
January 27, 2026 at 7:01 PM
Let’s be honest… when you saw this story, two things happened:
1) You thought “Dial-up still exists?”
2) You heard a familiar voice in your head say “You’ve got mail!”

#aol #americaonline #internet #technology
https://www.nbcnews.com/tech/tech-news/aol-dial-up-silenced-rcna234655
With a final screech, AOL's dial-up service goes silent
One of the earliest consumer internet options, AOL's dial-up service was once the most common way for people to access the early web.
www.nbcnews.com
October 5, 2025 at 12:20 PM
Happy "Talk Like A Pirate Day"!!!!
To celebrate, sit down and write a few "R" scripts today!
(if you don't get it right away... give it a minute.)
#rprogramming #talklikeapirateday #programming #technology
September 19, 2025 at 6:18 PM
Getting ready for the Apple event at 1pm ET…
#apple #iphone #iOS #ipad #technology
September 9, 2025 at 4:16 PM
The next Apple Event will be on September 9th @ 10am PT.
#apple #iphone #technology #applewatch #airpods #appleintelligence
August 26, 2025 at 8:25 PM
Tapbots (the maker of Ivory for Mastodon) has announced Phoenix for Bluesky, to be released later this year. #bluesky #socialmedia #atprotocol #activitypub
https://tapbots.com/phoenix/
March 5, 2025 at 9:50 PM
Reposted by That Tech Guy
Original post on sfba.social
sfba.social
March 5, 2025 at 8:13 PM
Good news!!!! We’re not all going to die in 2032!

Asteroid 2024 YR4 now expected to miss Earth in flyby
#astronomy #asteroid #armageddon2032 #space #science #technology […]
Original post on mastodon.social
mastodon.social
February 26, 2025 at 5:52 PM
Reposted by That Tech Guy
Bringing this highly requested feature to #mastodon and the fediverse is not as trivial as some might think, but quote posts are coming. Here is our latest write-up about our progress:

https://blog.joinmastodon.org/2025/02/bringing-quote-posts-to-mastodon/
Bringing Quote Posts to Mastodon
Quote Posts are a popular feature of many social media platforms. They offer the ability to share another person’s post to one’s own followers, while adding a comment.

We want to share our thinking process in implementing Quote Posts in Mastodon, and explain why it has taken us some time to do so.

Background

In the past couple of years, as Mastodon has grown, we’ve spent time meeting with community leaders across a spectrum of interests, to understand their needs. We have learned that many groups use Quote Posts as their primary means to build consensus and community on other platforms. Quote Posts used in this way convey trust and respect for the original post, building on or enhancing an original idea.

On the other hand, back when Mastodon was first developed, we had seen the feature used for malicious purposes on other platforms, for example, to intentionally quote someone out of context, to direct hate speech and harass people. At that time, it was an easy decision for us: Mastodon would not have quote posts.

The continued popularity of requests for us to implement the feature has shown that their absence prevents many people from joining the Fediverse. We want to add Quote Posts to help people to transition away from proprietary, billionaire-owned social media to the open social web.

If you’ve been following our project, we first mentioned that we were considering bringing Quote Posts to Mastodon back in 2023 (https://blog.joinmastodon.org/2023/05/a-new-onboarding-experience-on-mastodon/). During 2024, we applied for a grant from the NGI0 Entrust Fund (https://nlnet.nl/project/Mastodon-Quoting/) to support our research and implementation efforts. With that support, we have done a lot of research and thinking, and we are sharing the outcomes of this work with you here.

Challenges

There are two primary elements to bringing Quote Posts to Mastodon: user-centric, and technical.

As explained above, the team started out with a shared view that Quote Posts can be misused. Many people simply do not want their content to be reframed by others; or they may find that if it is reposted, they receive unwelcome attention.

In order to mitigate these issues, we plan to include several features in our implementation:
- You will be able to choose whether your posts can be quoted at all.
- You will be notified when someone quotes you.
- You will be able to withdraw your post from the quoted context at any time.

We also want to build a tight integration for Quote Posts with the reporting functionality, to help people to feel more safe.

On the technical side, the concept of Quote Posts is not standardised - there is no agreed way to build this feature into a W3C ActivityPub implementation so that it is automatically interoperable with the other applications in the Fediverse. Today, some third party Mastodon clients approximate quote posts by showing a preview if a post contains a link to another post - but those efforts do not come with any of the features that we want to include. We want to collaborate to create a specification, so that we can encourage a better (and safer) way for all clients to have this functionality.

We’ve spent time talking with users, with other Fediverse software developers (which include user facing applications), and with the developers of our own client apps, about how they might expect to see or implement Quote Posts. The output from this will be concrete proposals to help everyone building on the Fediverse.

The process

We are in the process of writing ActivityPub extensions, which we will publish as Fediverse Enhancement Proposals (https://codeberg.org/fediverse/fep), in collaboration with other developers, to cover these features for any ActivityPub software that chooses to use them. These specifications can allow everyone to efficiently implement this same feature in an interoperable way. We’ve shared initial work on this for ActivityPub developers (https://socialhub.activitypub.rocks/t/pre-fep-quote-posts-quote-policies-and-quote-controls/5031), and we’re also posting the background research we performed (https://github.com/mastodon/specs-background/blob/main/quote-posts/quote-posts-research-and-goals.md) that was discussed with others - in both cases, these are being posted as deeper dives for technical audiences and other implementers; they do not represent final outputs and choices.

In addition to these proposals, this feature will impact many parts of the Mastodon codebase, including the ActivityPub-handling code, the public API, web user interface, moderation panel and capabilities, the administration panel, and the official iOS and Android applications. We’re working on it, but Quote Posts will still take more time to develop.

The future

We know that Quote Posts are a source of concern for some members of the community, and highly requested by others. We’re committed to sharing our progress, and listening to your feedback. Thanks for being a part of the federated open social web, and for using Mastodon.
blog.joinmastodon.org
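As a rough illustration of the three user-facing controls the post describes (a quotability setting, a notification when you are quoted, and the ability to withdraw your post from a quote), here is a minimal sketch in Python. Every name in it is hypothetical and invented for this example; the real ActivityPub extension and Mastodon API are still being drafted, as the post explains.

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical model, for illustration only; not the draft FEP or Mastodon's API.

class QuotePolicy(Enum):
    ANYONE = "anyone"        # anyone may quote this post
    FOLLOWERS = "followers"  # only the author's followers may quote it
    NOBODY = "nobody"        # the post may not be quoted at all

@dataclass
class Post:
    author: str
    text: str
    quote_policy: QuotePolicy = QuotePolicy.ANYONE
    withdrawn_quote_ids: set = field(default_factory=set)  # quotes the author revoked

@dataclass
class QuotePost:
    id: str
    author: str
    comment: str
    quoted: Post

def can_quote(post: Post, requester: str, followers: set) -> bool:
    """Enforce the original author's quote policy before a quote is created."""
    if post.quote_policy is QuotePolicy.NOBODY:
        return False
    if post.quote_policy is QuotePolicy.FOLLOWERS:
        return requester in followers
    return True

def render_quote(qp: QuotePost) -> str:
    """Hide the quoted content if the original author has since withdrawn it."""
    if qp.id in qp.quoted.withdrawn_quote_ids:
        return f"{qp.author}: {qp.comment}\n[quoted post no longer available]"
    return f"{qp.author}: {qp.comment}\n> {qp.quoted.author}: {qp.quoted.text}"
```

The notification requirement would sit alongside this: whenever a quote is created and allowed, the server also delivers a "you were quoted" notification to the original author.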
February 14, 2025 at 4:09 PM
techcrunch.com
January 20, 2025 at 5:08 PM
Reposted by That Tech Guy
Threads and Bluesky have created viable alternatives to the platform once known as Twitter. But while the two services may share some of the same goals, they’ve shown very different visions for how text-based social networks should operate. Read more at @Engadget. #threads #bluesky #socialmedia […]
Original post on flipboard.social
flipboard.social
December 31, 2024 at 5:54 PM
What would happen if they started watching porn? (maybe we shouldn't be asking those types of questions.)
#robot #robots #surgery #medicine #technology

https://gizmodo.com/robots-are-learning-to-conduct-surgery-on-their-own-by-watching-videos-2000544229?
Robots Are Learning to Conduct Surgery on Their Own by Watching Videos
The artificial intelligence boom is already starting to creep into the medical field through the form of AI-based visit summaries and analysis of patient conditions. Now, new research demonstrates how AI training techniques similar to those used for ChatGPT could be used to train surgical robots to operate on their own.

Researchers from Johns Hopkins University and Stanford University built a training model using video recordings of human-controlled robotic arms performing surgical tasks. By learning to imitate actions on a video, the researchers believe they can reduce the need to program each individual movement required for a procedure. From the Washington Post (https://www.washingtonpost.com/science/2024/12/22/robots-learn-surgical-tasks/):

“The robots learned to manipulate needles, tie knots and suture wounds on their own. Moreover, the trained robots went beyond mere imitation, correcting their own slip-ups without being told ― for example, picking up a dropped needle. Scientists have already begun the next stage of work: combining all of the different skills in full surgeries performed on animal cadavers.”

To be sure, robotics have been used in the surgery room for years now—back in 2018, the “surgery on a grape” meme highlighted how robotic arms can assist with surgeries by providing a heightened level of precision. Approximately 876,000 robot-assisted surgeries were conducted in 2020 (https://pmc.ncbi.nlm.nih.gov/articles/PMC9225798/). Robotic instruments can reach places and perform tasks in the body where a surgeon’s hand will never fit, and they do not suffer from tremors. Slim, precise instruments can spare nerve damage. But robotics are typically guided manually by a surgeon with a controller. The surgeon is always in charge.

The concern by skeptics of more autonomous robots is that AI models like ChatGPT are not “intelligent,” but rather simply mimic what they have already seen before, and do not understand the underlying concepts they are dealing with. The infinite variety of pathologies in an incalculable variety of human hosts poses a challenge, then—what if the AI model has not seen a specific scenario before? Something can go wrong during surgery in a split second, and what if the AI has not been trained to respond?

At the very least, autonomous robots used in surgeries would need to be approved by the Food and Drug Administration. In other cases where doctors are using AI to summarize their patient visits and make recommendations, FDA approval is not required because the doctor is technically supposed to review and endorse any information they produce. That is concerning because there is already evidence that AI bots will make bad recommendations (https://gizmodo.com/doctors-say-ai-is-introducing-slop-into-patient-care-2000543805), or hallucinate and include information in meeting transcripts that was never uttered. How often will a tired, overworked doctor rubber-stamp whatever an AI produces without scrutinizing it closely?

It feels reminiscent of recent reports regarding how soldiers in Israel are relying on AI to identify attack targets (https://www.washingtonpost.com/technology/2024/12/29/ai-israel-war-gaza-idf/) without scrutinizing the information very closely. “Soldiers who were poorly trained in using the technology attacked human targets without corroborating [the AI] predictions at all,” a Washington Post story reads. “At certain times the only corroboration required was that the target was a male.” Things can go awry when humans become complacent and are not sufficiently in the loop.

Healthcare is another field with high stakes—certainly higher than the consumer market. If Gmail summarizes an email incorrectly, it is not the end of the world. AI systems incorrectly diagnosing a health problem, or making a mistake during surgery, is a much more serious problem. Who in that case is liable? The Post interviewed the director of robotic surgery at the University of Miami, and this is what he had to say:

“The stakes are so high,” he said, “because this is a life and death issue.” The anatomy of every patient differs, as does the way a disease behaves in patients.

“I look at [the images from] CT scans and MRIs and then do surgery,” by controlling robotic arms, Parekh said. “If you want the robot to do the surgery itself, it will have to understand all of the imaging, how to read the CT scans and MRIs.” In addition, robots will need to learn how to perform keyhole, or laparoscopic, surgery that uses very small incisions.

The idea that AI will ever be infallible is hard to take seriously when no technology is ever perfect. Certainly, this autonomous technology is interesting from a research perspective, but the blowback from a botched surgery conducted by an autonomous robot would be monumental. Who do you punish when something goes wrong? Who has their medical license revoked? Humans are not infallible either, but at least patients have the peace of mind of knowing they have gone through years of training and can be held accountable if something goes wrong. AI models are crude simulacrums of humans, behave sometimes unpredictably, and have no moral compass.

Another concern is whether relying too much on autonomous robots to conduct surgeries could end up resulting in doctors having their own abilities and knowledge atrophy, similar to how facilitating dating through apps results in relevant social skills becoming rusty.

If doctors are tired and overworked—a reason researchers suggested why this technology could be valuable—perhaps the systemic problems causing a shortage should be addressed instead. It has been widely reported that the U.S. is experiencing an extreme shortage of doctors due to the increasing inaccessibility of the field (https://nupoliticalreview.org/2023/04/06/how-to-create-a-physician-shortage-the-effect-of-medical-education-barriers/). The country is on track to experience a shortage of 10,000 to 20,000 surgeons by 2036, according to the Association of American Medical Colleges (https://www.facs.org/for-medical-professionals/news-publications/news-and-articles/bulletin/2024/april-2024-volume-109-issue-4/physician-workforce-data-suggest-epochal-change/).
gizmodo.com
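The training approach described here is essentially imitation learning: instead of hand-programming each movement, a model is trained to map what it observes (features from the demonstration video) to the actions the human operator took. Below is a minimal behavior-cloning sketch of that idea, with hypothetical dimensions and synthetic stand-in data; it is not the researchers' actual model.

```python
import torch
from torch import nn

# Illustrative only: learn a policy that imitates demonstrated robot-arm actions.
OBS_DIM, ACT_DIM = 64, 7  # hypothetical sizes: frame features in, joint commands out

# Stand-in data; in the research this would come from recorded
# human-teleoperated surgical demonstrations.
observations = torch.randn(1000, OBS_DIM)
expert_actions = torch.randn(1000, ACT_DIM)

policy = nn.Sequential(
    nn.Linear(OBS_DIM, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, ACT_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(50):
    optimizer.zero_grad()
    predicted = policy(observations)           # what the model would do
    loss = loss_fn(predicted, expert_actions)  # penalize deviation from the human
    loss.backward()
    optimizer.step()

# After training, the policy proposes an action for a new observation.
with torch.no_grad():
    action = policy(torch.randn(1, OBS_DIM))
```

A sketch like this also makes the skeptics' point concrete: the policy can only generalize from whatever the demonstrations covered, which is exactly the worry about scenarios the model has never seen.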
December 31, 2024 at 5:47 PM
techcrunch.com
December 10, 2024 at 6:21 PM