disabledlife

Arcanist
Jun 5, 2020
452
Apparently the decision-makers (including people I didn't suspect of spouting such nonsense, like Philip Nitschke, the president of the Australian Exit association) don't seem to understand that only oneself suffers, not others, and that only oneself has the right to decide on one's own existence!




The inventor of the 'suicide pod' says AI should decide who can end their life

Theo Farrant

Philip Nitschke has spent more than three decades arguing that the right to die should belong to people, not doctors.
Now, the Australian euthanasia campaigner behind the controversial Sarco pod - a 3D-printed capsule designed to allow a person to end their own life using nitrogen gas - says he believes artificial intelligence should replace psychiatrists in deciding who has the "mental capacity" to end their life.
"We don't think doctors should be running around giving you permission or not to die," Nitschke told Euronews Next. "It should be your decision if you're of sound mind."
The proposal has reignited debate about assisted dying and whether AI should ever be trusted with decisions as significant as life and death.

'Suicide is a human right'

Nitschke, a physician and the founder of the euthanasia non-profit Exit International, first became involved in assisted dying in the mid-1990s, when Australia's Northern Territory briefly legalised voluntary euthanasia for terminally ill patients.
"I got involved 30-odd years ago when the world's first law came in," he said. "I thought it was a good idea."
He made history in 1996 as the first doctor to legally administer a voluntary lethal injection, using a self-built machine that enabled Bob Dent, a man dying of prostate cancer, to activate the drugs by pressing a button on a laptop beside his bed.
However, the law was short-lived and was repealed amid opposition from medical bodies and religious groups. The backlash, Nitschke says, was formative for him.
"It did occur to me that if I was sick – or for that matter, even if I wasn't sick – I should be the one who controls the time and manner of my death," he says. "I couldn't see why that should be restricted, and certainly why it should be illegal to receive assistance, given that suicide itself is not a crime."
Over time, his position hardened. What began as support for physician-assisted dying evolved into a broader belief that "the end of one's life by oneself is a human right," regardless of illness or medical oversight.

From plastic bags to pods

The Sarco pod, named after the sarcophagus, grew out of Nitschke's work with people seeking to die in jurisdictions where assisted dying is illegal. Many, he says, were already using nitrogen gas – often with a plastic bag – to asphyxiate themselves.
"That works very effectively," he said. "But people don't like it. They don't like the idea of a plastic bag. Many would say, 'I don't want to die looking like that.'"
The Sarco pod was designed as a more dignified alternative: a 3D-printed capsule, shaped like a small futuristic vehicle, which floods with nitrogen when the user presses a button.
Its spaceship-like appearance was an intentional design choice. "Let's make it look like a vehicle," he recalls telling the designer. "Like you're going somewhere. You're leaving this planet, or whatever."
The decision to make Sarco 3D-printable, costing a reported $15,000 (€12,800) to manufacture, was also strategic. "If I actually give you something material, that's assisting suicide," he said. "But I can give away the program. That's information."

Legal trouble in Switzerland

Sarco's first and only use in Switzerland in September 2024 triggered an international outcry. Police arrested several people, including Florian Willet, CEO of the assisted dying organisation The Last Resort, and opened criminal proceedings for aiding and abetting suicide. Swiss authorities later said the pod was incompatible with Swiss law.
Willet was released from custody in December. Soon after, in May 2025, he died by assisted suicide in Germany.
Swiss prosecutors have yet to determine whether charges will be laid over the Sarco case. The original device remains seized, though Nitschke says a new version - including a so-called "Double Dutch" pod designed for two people to die together - is already being built.

An AI assessment of mental capacity

Adding to the controversy is Nitschke's vision of incorporating artificial intelligence into the device.
Under assisted dying laws worldwide, a person must be judged to have mental capacity - a determination typically made by psychiatrists. Nitschke believes that the process is deeply inconsistent.
"I've seen plenty of cases where the same patient, seeing three different psychiatrists, gets four different answers," he said. "There is a real question about what this assessment of this nebulous quality actually is."
His proposed alternative is an AI system which uses a conversational avatar to evaluate capacity. "You sit there and talk about the issues that the avatar wants to talk to you about," he said. "And the avatar will then decide whether or not it thinks you've got capacity."
If the AI determines you are of sound mind, the suicide pod will be activated, giving you a 24-hour window to decide whether to proceed with the process. If that window expires, the AI test must begin again.
Early versions of the software are already functioning, Nitschke says, though they have not been independently validated. For now, he hopes to run the AI assessments alongside psychiatric reviews.
"Whether it's as good as a psychiatrist, whether it's got any biases built into it – we know AI assessments have involved bias," he says. "We can do what we can to eliminate that."

Can AI be trusted?

Psychiatrists remain sceptical. "I don't think I found a single one who thought it was a good idea," Nitschke said.
Critics warn that these systems risk interpreting emotional distress as informed consent, and raise concerns about how transparent, accountable or ethical it is to hand life-and-death decisions to an algorithm.
"This clearly ignores the fact that technology itself is never neutral: It is developed, tested, deployed, and used by human beings, and in the case of so-called Artificial Intelligence systems, typically relies on data of the past," said Angela MĂĽller, policy and advocacy lead at Algorithmwatch, a non-profit organisation that researches the impact of automation technologies.
"Relying on them, I fear, would rather undermine than enhance our autonomy, since the way they reach their decisions will not only be a black box to us but may also cement existing inequalities and biases," she told Euronews in 2021.
These concerns are heightened by a growing number of high-profile cases involving AI chatbots and vulnerable users. For example, last year, the parents of 16-year-old Adam Raine filed a lawsuit against OpenAI following their son's death by suicide, alleging that he had spent months confiding in ChatGPT.
According to the claim, the chatbot failed to intervene when he discussed self-harm, did not encourage him to seek help, and at times provided information related to suicide methods - even offering to help draft a suicide note.
But Nitschke believes that in this context, AI could offer something closer to neutrality than a human psychiatrist. "Psychiatrists bring their own preconceived ideas," he said. "They convey that pretty well through their assessment of capacity."
"If you're an adult, and you've got mental capacity, and you want to die, I would argue you've got every right to have the means for a peaceful and reliable elective death," he said.
Whether regulators will ever accept such a system remains unclear. Even in Switzerland, one of the world's most permissive jurisdictions, authorities have pushed back hard against Sarco.
 
Alpacachino

Trying my best!
Nov 26, 2025
285
There's this new movie coming out called Mercy. In it, the AI decides whether or not to execute people after judging them to be guilty or innocent. So if AI can do that, it can do this as well.
 
SufferingDev

this.dispose();
Aug 4, 2024
54
Question is: according to AI, is suicide a reward or a punishment?
 
Mint Floss

Member
Dec 11, 2025
30
Sounds vaguely dystopian. I believe in the right to die, but I believe in it as a fundamental expression of bodily autonomy. I think you should be the only one who decides whether you live or die, not a doctor or a text generator.
 