Practice economics · 14 min read · Field Notes

The Therapist Side-Gig Economy: Why Your Licensing Exam Did Not Prepare You for This

2026-05-05 · Matthew Sexton, LCSW

I sat for the Texas LCSW exam in a windowless test center and answered one hundred and seventy multiple-choice questions about ethics, scope of practice, and the developmental stages of a six-year-old. None of those questions asked me how to price ninety minutes of clinical supervision. None of them asked how to set up a private-pay rate sheet that does not undercut the local market while still feeding two kids. None of them asked which HIPAA-eligible video platform to use, or how to write a referral protocol to a hospital you do not work at.

The exam wanted to know if I knew the difference between Tarasoff I and Tarasoff II. Fair enough. The exam did not want to know if I knew how to run a small business, because the test bank assumes I do not need to. That assumption is wrong, and it has been wrong for a long time.

The size of the side gig

The most cited number for therapist part-time work comes out of the U.S. Bureau of Labor Statistics, which puts roughly one in five mental-health and substance-abuse social workers in part-time employment status, and which tracks self-employment among clinical psychologists and counselors at significantly higher rates than the all-occupation average.[1] The American Psychological Association’s 2023 Practitioner Pulse survey reported that a non-trivial share of psychologists and counselors describe their primary employment as “solo private practice” with at least one secondary income stream — supervision, consultation, contract work for an agency, telehealth panel work, or training.[2]

The Association of Social Work Boards reports its candidate pipeline with similar texture: a meaningful percentage of newly licensed clinical social workers enter solo practice within thirty-six months of licensure, and within that group a sizable subset run a second contracted clinical role at the same time.[3] Whether you call it a side practice, a moonlight panel, contract telehealth, supervisory hours, or just “the Tuesday-night clients,” the operational shape is the same. Two practices. Two scheduling systems. Two clinical responsibilities. Often two licensure jurisdictions if the second income is telehealth across a state line. Always one human clinician trying to hold all of it.

Why graduate school cannot help you with this

Graduate clinical training is structured around a model that no longer reflects how most clinicians earn a living. The CSWE accreditation framework for an MSW program, the COAMFTE framework for an MFT program, and the APA program-accreditation standards for clinical psychology all foreground supervised practice in agency or hospital settings. The economic model implicit in those frameworks is one where the program graduates an associate-licensed clinician into a salaried agency role for two to four years of supervised practice, then moves them upward into either a senior agency role or full private practice at full licensure.

That economic model existed. It is no longer the dominant model, for reasons that are easy to name: agency salaries have not kept pace with inflation, employers in many regions have stopped bearing the cost of supervision, and the average mental-health agency now operates under reimbursement pressures that have eroded the salaried-clinician role from the inside.[4] The economic outlet that compensates for that erosion is the side gig: the supplementary cash-pay client base, the supervisory income, the contract panel work, the contracted training engagements.

Nothing in graduate clinical training prepares a clinician for the operational load of running that side gig at the same time as a primary clinical role. There is no required course on rate-setting. There is no required course on cancellation-policy enforcement. There is no required course on how to triage the gap between a client who texts you at 9 PM “not feeling great” and a client who is in active suicidal ideation. There is no required course on how to maintain a HIPAA-eligible tool stack across two practices on a graduate-trained clinician’s budget.

The dropout problem you are silently absorbing

One of the most replicated findings in the psychotherapy outcomes literature is that a large share of clients drop out before therapy is dose-effective. Wierzbicki and Pekarik’s 1993 meta-analysis of 125 studies found a weighted-mean dropout rate of 46.86% across outpatient samples, with significant variation by setting and population.[5] That number has held up reasonably well in subsequent reviews. The Practitioner Pulse data and similar field surveys find that solo and small-group practices in 2023 and 2024 still report session-2 retention numbers below 60% on first-time intakes.[2]

You absorb that dropout rate as a sole proprietor. The agency does not absorb it for you. There is no payroll department that backfills the empty 4:30 PM slot when the client did not return for session 2. There is no salaried floor that protects your monthly take-home when 30% of intakes never make it to a third visit. The side-gig clinician, even more than the agency clinician, is downstream of the engagement gap.
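To make the exposure concrete, here is a toy revenue model. The sub-60% session-2 retention figure loosely anchors one number; the session rate, session-3 retention, and the assumption that retention stabilizes after session 3 are all invented for illustration.

```python
# Toy model of how dropout hits a solo practice's revenue.
# SESSION_RATE and SESSION_3_RETENTION are hypothetical; SESSION_2_RETENTION
# is loosely anchored to the sub-60% figure cited above.

SESSION_RATE = 150          # hypothetical private-pay rate per session, USD
SESSION_2_RETENTION = 0.58  # share of intakes who return for session 2
SESSION_3_RETENTION = 0.70  # of those, share who return for session 3

def expected_revenue_per_intake(planned_sessions: int = 8) -> float:
    """Expected revenue from one intake, assuming retention stabilizes
    after session 3 (a simplifying assumption, not a clinical claim)."""
    survival = 1.0   # probability the client is still attending
    total = 0.0
    for session in range(1, planned_sessions + 1):
        if session == 2:
            survival *= SESSION_2_RETENTION
        elif session == 3:
            survival *= SESSION_3_RETENTION
        total += survival * SESSION_RATE
    return total

# Compare against the naive plan of 8 fully attended sessions.
naive = 8 * SESSION_RATE
actual = expected_revenue_per_intake()
```

Under these made-up numbers, an eight-session treatment plan yields barely half the revenue the calendar promises, which is exactly the gap a salaried floor would otherwise absorb.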

The literature on what reduces dropout is unflashy. The factors that move retention are not novel modalities. They are operational discipline: a structured intake that orients the client, a measurement-based check-in that reflects clinical change back to both clinician and client, a between-session continuity layer that tells the client they are seen between Tuesdays, and a clinically realistic safety floor.[6][7] The strongest single predictor of outcome that we have measured across decades of psychotherapy research is the therapeutic alliance, and alliance is built in the first two sessions or it is not built at all.[8]

What measurement-based care actually requires

Measurement-based care is one of the better-evidenced operational practices in mental-health treatment. Lewis et al. (2019) reviewed the implementation literature and found that routine outcome monitoring, when built into the clinical workflow rather than bolted on, is associated with significantly improved depression outcomes versus treatment as usual.[9]

The validated instruments are not exotic. The PHQ-9 is the standard screening and severity measure for depression, with reliable-change and minimal-clinically-important-difference cuts that have been replicated across populations.[10] The GAD-7 is its anxiety counterpart with similar psychometric properties.[11] The C-SSRS is the standard for structured suicide-risk assessment, validated across clinical and emergency-department populations.[12]

The instruments are not the problem. The problem is the operational practice. Doing the PHQ-9 once at intake and then never again is theatrical, not clinical. The published reliable-change threshold for the PHQ-9 is approximately five points; the threshold for clinically meaningful response is comparable.[10] Reading those thresholds requires repeated administration on a clinically defensible schedule, with delta tracked, with the result visible to the clinician at session start, and with the trajectory shared back to the client. That is what the literature on measurement-based care actually requires of the clinician, and that is the operational load that most solo-practice and side-practice clinicians do not have time to do well by hand.[13]
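The delta-tracking loop described above is small enough to sketch. The 5-point reliable-change threshold comes from the PHQ-9 literature cited in the text; the function name, return shape, and administration schedule are illustrative, not any product's actual logic.

```python
# Minimal sketch of repeated PHQ-9 administration with reliable-change
# tracking. The 5-point threshold is from the cited literature; the rest
# (names, schedule, labels) is invented for illustration.

RELIABLE_CHANGE = 5  # approximate PHQ-9 reliable-change threshold, in points

def phq9_trajectory(scores: list[int]) -> dict:
    """Summarize a series of PHQ-9 totals (0-27 each) for a pre-session brief."""
    if not scores:
        raise ValueError("no administrations recorded")
    baseline, latest = scores[0], scores[-1]
    delta = latest - baseline
    if delta <= -RELIABLE_CHANGE:
        status = "reliable improvement"
    elif delta >= RELIABLE_CHANGE:
        status = "reliable deterioration"
    else:
        status = "no reliable change yet"
    return {"baseline": baseline, "latest": latest,
            "delta": delta, "status": status}

# One intake score followed by three re-administrations.
brief = phq9_trajectory([18, 16, 13, 11])
```

The point of the sketch is the workflow, not the arithmetic: the delta only means something if administration is repeated on a schedule and the result is in front of the clinician at session start.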

Homework, homework, homework

The cognitive-behavioral literature has consistently found that homework completion is associated with improved outcomes. Kazantzis and colleagues’ meta-analyses across two decades have found a small-to-moderate but reliable effect of homework completion on symptom outcomes in CBT, particularly for depression and anxiety.[14]

The catch is that homework completion is a behavior the clinician cannot observe between sessions. The homework that does not get done is invisible to the therapist until the client walks back into the room and apologetically confesses, or does not confess, that the worksheet stayed in the kitchen drawer for six days. A between-session check-in — even a thirty-second prompt — turns that invisible behavior into a visible signal. A clinician who can see, before session, that the client did three of five thought-record entries and rated the assignment as moderately helpful walks into session with material to work with. A clinician who cannot see that walks in with a guess.
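The thirty-second check-in described above reduces to a very small data structure. The field names and the 0–4 helpfulness scale below are assumptions made for illustration.

```python
# Hypothetical shape of a between-session homework check-in, reduced to
# the one-line signal a clinician might see before session. Field names
# and the 0-4 helpfulness scale are invented for illustration.

from dataclasses import dataclass

@dataclass
class HomeworkCheckin:
    assigned: int        # e.g. thought-record entries assigned
    completed: int       # entries the client reports completing
    helpfulness: int     # client rating, 0 (not helpful) to 4 (very helpful)

def presession_line(c: HomeworkCheckin) -> str:
    """Render the check-in as a single pre-session brief line."""
    rate = c.completed / c.assigned if c.assigned else 0.0
    return (f"Homework: {c.completed}/{c.assigned} completed "
            f"({rate:.0%}), helpfulness {c.helpfulness}/4")

# The "three of five thought records" client from the paragraph above.
line = presession_line(HomeworkCheckin(assigned=5, completed=3, helpfulness=2))
```

One line like this, visible before the client sits down, is the difference between walking in with material and walking in with a guess.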

Crisis at 9 PM

Every clinician who has done this work for more than two years has a story about the 9 PM text. The story is structurally similar across clinicians: a client sends a vague distress message after office hours; the clinician feels their stomach drop; the clinician weighs how much to read into the message; the clinician calls or texts back; sometimes it is fine, sometimes it is not.

The Stanley-Brown Safety Plan Intervention is the most widely cited evidence-based brief intervention for individuals at suicide risk.[15] SAMHSA’s 988 Suicide & Crisis Lifeline is the national 24/7 backstop and has been operational since 2022.[16] The C-SSRS provides the structured screening framework.[12] All of these tools work better when they are integrated into a continuous clinical surface than when they live in a static PDF in a folder a client cannot find at 9 PM.

The side-gig clinician is more exposed to this risk than the agency clinician, not less. The side-gig clinician does not have an after-hours coverage rotation. The side-gig clinician is the after-hours coverage rotation. Tools that translate static safety plans into clinically realistic continuous surfaces — structured screening at intake, between-session check-ins that flag escalation, fall-back to 988 with proper documentation, clinician notification within twenty-four hours rather than seven days — reduce the load on the individual clinician and improve the floor for the client.
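The escalation flow described above can be sketched as a triage rule. The thresholds, field names, and mood scale below are assumptions for illustration, not VibeCheck's actual clinical logic; only the 988 fallback and the 24-hour notification window come from the text.

```python
# Illustrative triage rule for a between-session check-in: screen, flag,
# notify the clinician inside 24 hours, surface 988 where warranted.
# Thresholds and field names are assumptions, not any product's real logic.

from datetime import datetime, timedelta, timezone

LIFELINE = "988"  # SAMHSA 988 Suicide & Crisis Lifeline

def triage_checkin(mood: int, ideation_endorsed: bool,
                   received_at: datetime) -> dict:
    """Decide whether a check-in (mood 0-10, low = worse) needs escalation."""
    flagged = ideation_endorsed or mood <= 2
    return {
        "flagged": flagged,
        # Any ideation endorsement surfaces the 988 fallback to the client.
        "show_lifeline": ideation_endorsed,
        # Flagged check-ins must reach the clinician within 24 hours.
        "notify_clinician_by": (received_at + timedelta(hours=24)
                                if flagged else None),
    }

# The 9 PM check-in from the story above.
now = datetime(2026, 5, 5, 21, 0, tzinfo=timezone.utc)
result = triage_checkin(mood=1, ideation_endorsed=False, received_at=now)
```

The design point is the deadline, not the rule: a flag that surfaces at the next session a week later is documentation, while a flag that reaches the clinician within a day is coverage.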

Telehealth across state lines

The American Psychological Association’s telepsychology guidelines, originally published in 2013 and updated in 2018, set out the practice-standard expectations for clinical work delivered via video, including jurisdictional licensure requirements, technology selection, informed consent, and emergency planning.[17] Telehealth has been demonstrated effective for the major outpatient mental-health conditions in numerous trials and meta-analyses; the modality is not the limiting factor.[18]

The limiting factor for the side-gig clinician working a telehealth panel is operational. PSYPACT, the Counseling Compact, and the Social Work Compact have improved interstate licensure portability for participating jurisdictions, but the patchwork is still uneven, and a clinician with a license in two states is still operating two compliance perimeters. Each state has its own informed-consent requirements, its own mandatory-reporting thresholds, its own licensure-board complaint process, and its own definition of practicing within scope. None of that was on the licensing exam.

What the operational gap actually looks like

If you sit down with a side-gig clinician at 9 PM on a Sunday and ask what is in their tool stack, you will hear something close to this: an EHR they pay for and partially use; a video platform that may or may not have a Business Associate Agreement; a scheduling tool with limited integration; a paper or PDF intake packet emailed manually; a Google Sheet for outcome measures the clinician filled out twice and then stopped filling out; a personal phone for client contact, with text messages stored on a device they also use to scroll TikTok. The stack is not the result of a clinical decision. The stack is the result of fifteen separate one-off decisions made under time pressure across two years.

That stack is not a HIPAA-defensible posture, but more importantly it is not a clinically defensible posture. It is the posture that produces the 46% dropout rate. It is the posture that produces the after-hours text and the stomach drop. It is the posture that produces the unbilled three hours of admin per day that BLS quietly tracks under "other duties."[1]

What VibeCheck does about it

VibeCheck is not a marketing tool, and it is not a workflow optimizer. It is the clinical layer between sessions for licensed therapists running solo or small-group practices. Structured intake before session 1. Validated measures (PHQ-9, GAD-7, PCL-5) on a clinically defensible schedule with reliable-change thresholds surfaced to the clinician. Brief between-session check-ins that produce a one-screen pre-session brief. Stanley-Brown safety plans that are continuous rather than static. C-SSRS where the clinical context warrants. Clinician notification on flagged check-ins inside twenty-four hours rather than at the next session. AI-assisted — clinician-reviewed — note scaffolding, because the alternative is the dinner-hour notes problem.

The architecture is HIPAA-grade because that is not optional. The application lives at app.vibecheck.luxury, on AWS, with an executed BAA. PHI lives in PostgreSQL with pgcrypto encryption at rest. Vertex AI under the Google Cloud BAA covers any AI-assisted workflow, with prompts and responses scoped to the specific clinical task. No PHI on the marketing site you are reading. No PHI on GitHub. The clinical decisions in the product were made by a licensed clinician.

What this site is not

This is not a place to argue that AI replaces therapists. The literature does not support that framing, and I will write a separate post about why. This is also not a place to claim that a piece of software fixes burnout, because burnout is not a software problem. The pitch is narrower and, I think, more honest: there is an operational gap between what a licensed clinician is asked to do and what a licensed clinician has time to do, and the gap is wider for clinicians running a side practice than for clinicians inside agencies. Tools that are designed by clinicians, audited by clinicians, and built around the actual clinical workflow can make that gap measurably smaller.

The licensing exam was not going to teach me how to do this. The market was going to teach me, and it did, and now I would like to make the curve less steep for the next clinician sitting in that windowless test center.

References

  1. U.S. Bureau of Labor Statistics, Occupational Outlook Handbook: Mental Health Counselors and Marriage and Family Therapists; Social Workers; Psychologists. 2024 edition. bls.gov
  2. American Psychological Association. (2023). Practitioner Pulse Survey 2023. apa.org
  3. Association of Social Work Boards. (2024). Annual ASWB Pass Rate and Practice Analysis Report. aswb.org
  4. Beck, A. J., Singer, P. M., Buche, J., Manderscheid, R., & Buerhaus, P. (2018). Improving data for behavioral health workforce planning. Psychiatric Services, 69(3), 291–293.
  5. Wierzbicki, M., & Pekarik, G. (1993). A meta-analysis of psychotherapy dropout. Professional Psychology: Research and Practice, 24(2), 190–195.
  6. Norcross, J. C., & Wampold, B. E. (2011). Evidence-based therapy relationships: Research conclusions and clinical practices. Psychotherapy, 48(1), 98–102.
  7. Barber, J. P., Connolly, M. B., Crits-Christoph, P., Gladis, L., & Siqueland, L. (2000). Alliance predicts patients’ outcome beyond in-treatment change in symptoms. Journal of Consulting and Clinical Psychology, 68(6), 1027–1032.
  8. Flückiger, C., Del Re, A. C., Wampold, B. E., & Horvath, A. O. (2018). The alliance in adult psychotherapy: A meta-analytic synthesis. Psychotherapy, 55(4), 316–340.
  9. Lewis, C. C., Boyd, M., Puspitasari, A., et al. (2019). Implementing measurement-based care in behavioral health: A review. JAMA Psychiatry, 76(3), 324–335.
  10. Kroenke, K., Spitzer, R. L., & Williams, J. B. W. (2001). The PHQ-9: Validity of a brief depression severity measure. Journal of General Internal Medicine, 16(9), 606–613.
  11. Spitzer, R. L., Kroenke, K., Williams, J. B. W., & Löwe, B. (2006). A brief measure for assessing generalized anxiety disorder: The GAD-7. Archives of Internal Medicine, 166(10), 1092–1097.
  12. Posner, K., Brown, G. K., Stanley, B., et al. (2011). The Columbia–Suicide Severity Rating Scale: Initial validity and internal consistency findings from three multisite studies with adolescents and adults. American Journal of Psychiatry, 168(12), 1266–1277.
  13. Korotitsch, W. J., & Nelson-Gray, R. O. (1999). An overview of self-monitoring research in assessment and treatment. Psychological Assessment, 11(4), 415–425.
  14. Kazantzis, N., Whittington, C., & Dattilio, F. (2010). Meta-analysis of homework effects in cognitive and behavioral therapy: A replication and extension. Clinical Psychology: Science and Practice, 17(2), 144–156.
  15. Stanley, B., & Brown, G. K. (2012). Safety planning intervention: A brief intervention to mitigate suicide risk. Cognitive and Behavioral Practice, 19(2), 256–264.
  16. Substance Abuse and Mental Health Services Administration. (2024). 988 Suicide & Crisis Lifeline Performance Metrics. samhsa.gov/find-help/988
  17. American Psychological Association. (2018). Guidelines for the Practice of Telepsychology (originally 2013). apa.org
  18. Athanasopoulou, C., & Dopson, S. (2015). What about telepsychiatry? A systematic review. Primary Care Companion for CNS Disorders, 17(4).
  19. Morse, G., Salyers, M. P., Rollins, A. L., Monroe-DeVita, M., & Pfahler, C. (2012). Burnout in mental health services: A review of the problem and its remediation. Administration and Policy in Mental Health, 39(5), 341–352.
  20. McHugh, R. K., Whitton, S. W., Peckham, A. D., Welge, J. A., & Otto, M. W. (2013). Patient preference for psychological vs pharmacologic treatment of psychiatric disorders: A meta-analytic review. Journal of Clinical Psychiatry, 74(6), 595–602.