This post was submitted on 13 Jul 2023.

Healthcare Security on Clinicians-Exchange


Internet security and healthcare security topics.

NO CASE CONSULTATIONS! PUBLIC FORUM! NO CONFIDENTIAL DETAILS ON CLIENTS ALLOWED!

This is NOT the place to ask for therapeutic help for yourself or a loved one. Nothing written in this community shall be construed to constitute the formation of a professional relationship between therapist and client.



I am informed that a new product called #Mentalyc has entered the market. Its mission is to write psychotherapy notes for clinicians AND to gather a non-identifiable dataset for research into clinical best practices.

I have no firm opinion yet on Mentalyc, but it's expensive ($39-$69 per month per clinician) and I'd personally need to know a lot more about what's in that dataset and who is benefiting from it.

So I'm asking the community for thoughts on what acceptable ethical and practical criteria would be for an AI to write psychotherapy notes or medical notes.

Here are MY thoughts so far:

  1. REQUIRED: The AI either:
     1a) invents NOTHING and takes 100% of the information in the note from the clinician, or
     1b) prompts the clinician for additional symptoms often present in the condition before writing the note, or
     1c) presents a very clear review page that lets the clinician approve, delete, or modify anything the AI added that it was not explicitly told to include, before the note is finalized. (A minimal sketch of such a check follows this list.)
     For example, in an experiment with Bard, a clinician found that Bard added sleep problems as an invented symptom to a SOAP note for a person with depression and anxiety. That is a non-bizarre, even likely, symptom addition -- but it would still have to be approved as valid for the person in question.

  2. OPTIONAL: The AI runs on MY computer and does NOT report anything back to the Internet. This will not be on everyone's list, but I've seen too many BAA subcontractors playing fast and loose with the definition of HIPAA (medical privacy), and there is more money to be made in data sales than in clinician subscriptions to an AI.

  3. OPTIONAL: Inexpensive (There are several free AI tools emerging.)

  4. OPTIONAL: Open Source

  5. Inputting data into the AI to write the note is less work than just writing the note personally. (Maybe a complex tablet-based clickable form? Then again, a pretty high percentage of a note can be in a clickable-form format anyway.)

  6. The AI does NOT record the entire session and then write a note based upon what was said. (It might accept dictated note directions, much as doctors dictate notes to transcribers today.)
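
To make criterion 1c concrete, here is a minimal sketch of the kind of review step I have in mind: diff the AI's draft against what the clinician actually supplied and flag anything extra for approval. Everything here is hypothetical -- the function name, the toy symptom vocabulary -- and a real checker would need something smarter than keyword matching:

```python
# Hypothetical sketch of criterion 1c: flag anything in the AI's draft
# that the clinician did not explicitly supply, so it can be approved,
# edited, or deleted before the note is finalized.

CLINICIAN_FACTS = {"depression", "anxiety", "tearful", "improved appetite"}

# Toy vocabulary of symptom terms the checker knows about (illustrative only).
KNOWN_SYMPTOM_TERMS = CLINICIAN_FACTS | {
    "sleep problems", "insomnia", "panic attacks", "suicidal ideation",
}

def flag_unapproved_content(draft: str) -> list[str]:
    """Return known symptom terms that appear in the draft but were
    never supplied by the clinician -- i.e., terms the AI invented."""
    draft_lower = draft.lower()
    return [term for term in KNOWN_SYMPTOM_TERMS - CLINICIAN_FACTS
            if term in draft_lower]

draft_note = ("Client presents with depression and anxiety and "
              "reports sleep problems and improved appetite.")

for term in flag_unapproved_content(draft_note):
    print(f"Not in clinician input -- approve, edit, or delete: {term}")
# Prints: Not in clinician input -- approve, edit, or delete: sleep problems
```

Run on the Bard example from item 1, this would surface "sleep problems" for the clinician to approve or strike, exactly the workflow 1c describes.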

I think I may be envisioning a checkbox and drop-down menu form, along with a space for the clinician to write a few keywords and phrases; the AI (on my laptop) then takes this and writes a note -- possibly just a paragraph to go along with the already-existing form fields in the official note. It's early days in my thinking.
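
As a rough, hypothetical sketch of that pipeline: structured form fields plus free-text keywords get rendered into a constrained prompt for a locally run model (criterion 2), so the model sees ONLY what the clinician entered. The field names and prompt wording below are made up for illustration:

```python
# Hypothetical sketch of the form-plus-keywords idea. Checkboxes and
# drop-downs become structured fields, the clinician adds a few keywords,
# and a locally hosted model drafts one paragraph from ONLY that input.

form = {
    "presenting_problems": ["depression", "anxiety"],  # checkboxes
    "affect": "constricted",                           # drop-down
    "risk": "denied SI/HI",                            # drop-down
    "interventions": ["CBT", "mindfulness"],           # checkboxes
}
keywords = "tearful discussing job stress; practiced breathing exercise"

prompt = (
    "Write one paragraph for a psychotherapy progress note.\n"
    "Use ONLY the facts below. Do not add symptoms, history, or "
    "observations that are not listed.\n\n"
    + "\n".join(f"{field}: {value}" for field, value in form.items())
    + f"\nclinician keywords: {keywords}"
)

def draft_with_local_model(text: str) -> str:
    # Placeholder: swap in whatever locally hosted model you trust.
    # Per criterion 2, nothing here should leave this machine.
    raise NotImplementedError

print(prompt)  # inspect exactly what the model would be given
```

The model call is left as a stub on purpose: the point is that everything upstream of it is inspectable, and whatever draft comes back would still go through the review step from criterion 1c.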

I have set up this same discussion here: https://mastodon.clinicians-exchange.org/@admin/110153358784312024

You do not need a Mastodon account to read it -- only to post. This should also get the attention of computer scientists, AI researchers, and other technical folks, as well as counseling professionals.
