Providing Healthcare Services for the Formerly Homeless

The evaluation of the HEALTH project, a healthcare program providing services for the formerly homeless, focused on three principal outcomes: services delivered, access to acute and chronic healthcare, and health status. The project's services were delivered to the intervention sites between August 1999 and July 2001. Residents at both the intervention and comparison sites were sampled on three occasions. The baseline surveys were conducted as personal interviews, while both the six- and 18-month follow-ups were completed using a group administration methodology.

Surveys administered in a group setting may make respondents less likely to ask questions and more easily distracted. However, as a survey methods study found, “the group administration method saves time and money and introduces no more measurement error than personal interviews.” The program also included limited physical evaluations, including blood pressure, vision, and oral cavity examinations, conducted by a physician and trained medical students. These physical tests were offered at various times of day to ensure that all interested residents had an opportunity to participate. In some instances, these scheduled tests conflicted with participants’ work or other obligations; the effect this may have had on measurable outcomes is addressed later.

The quasi-experimental design most accurately reflected in this evaluation is the Nonequivalent Control Group Design. According to Campbell and Stanley, in Experimental and Quasi-Experimental Designs for Research, this design (Design 10) is “one of the most widespread experimental designs…in which the control group and experimental group do not have pre-experimental sampling equivalence.”

Administrators conducting the healthcare evaluation were not able to randomly assign participants to the healthcare program. However, “despite high resident flux, population characteristics were remarkably stable at all sites.” For each outcome, a separate regression model was fit to the data from each of the two follow-up sampling occasions. These models used analysis of covariance techniques to control for the effects of baseline between-site differences on the study outcome. Inferences about the effect of the healthcare intervention were based on estimated regression coefficients and corresponding asymptotic test statistics for an indicator variable (dummy variable: 1 = residents at HEALTH project sites, 0 = residents at comparison sites).
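
To make the analytic approach concrete, here is a minimal sketch of an ANCOVA-style model with an intervention dummy, written in Python with statsmodels. The variable names and toy data are hypothetical illustrations, not the evaluators’ actual code or dataset.

```python
# Hypothetical sketch of the ANCOVA-style regression described above.
# Column names and values are illustrative only, not the HEALTH evaluation data.
import pandas as pd
import statsmodels.formula.api as smf

# One row per respondent at a given follow-up occasion.
# 'intervention' is the dummy variable: 1 = HEALTH project site, 0 = comparison site.
# 'baseline_outcome' is the baseline value used as the covariate.
followup = pd.DataFrame({
    "outcome":          [122, 130, 118, 141, 125, 135, 128, 119],
    "baseline_outcome": [125, 131, 120, 138, 127, 133, 126, 121],
    "intervention":     [1, 0, 1, 0, 1, 0, 1, 0],
})

# Analysis of covariance: adjust for baseline between-site differences,
# then read the intervention effect off the dummy variable's coefficient.
model = smf.ols("outcome ~ baseline_outcome + intervention", data=followup).fit()
print(model.params["intervention"])   # estimated intervention effect
print(model.pvalues["intervention"])  # corresponding test statistic's p-value
```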

The planned services intervention was intended to increase receipt of medical, dental, and mental healthcare, with referrals for outside treatment when appropriate. Receipt of services was assessed through patient surveys in which respondents were asked about recent healthcare treatments and discussions with program medical staff. Access to healthcare was assessed by asking respondents to rate their satisfaction on a numerical scale. Finally, health status outcomes were assessed using scores on a Physical Functioning Scale and a Mental Health Scale.

In terms of statistical analysis, the regression adjustment included effects for sex, age, race, and education, predictors identified as having significant and stable effects over time. The regression coefficients were estimated using baseline data, and inferences about HEALTH project effects accounted for the site-level assignment of the intervention. Measures of central tendency and dispersion for all variables of interest were compared between intervention and comparison sites.
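
The covariate adjustment and the descriptive site comparison can be sketched in the same way. Again, the column names and toy data below are hypothetical, and the model is only an illustration of the kind of adjustment described, not the evaluators’ actual specification.

```python
# Hypothetical sketch of the covariate-adjusted comparison described above.
import pandas as pd
import statsmodels.formula.api as smf

survey = pd.DataFrame({
    "outcome":      [3.2, 4.1, 2.8, 3.9, 4.4, 3.0, 3.7, 4.2, 3.5, 3.1],
    "intervention": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
    "sex":          ["F", "M", "F", "F", "M", "M", "F", "M", "M", "F"],
    "age":          [42, 55, 37, 61, 48, 52, 39, 44, 58, 46],
    "race":         ["B", "W", "W", "B", "B", "W", "W", "B", "W", "B"],
    "education":    [10, 12, 9, 11, 12, 10, 13, 8, 11, 9],
})

# Central tendency and dispersion, compared between intervention and comparison sites.
print(survey.groupby("intervention")["outcome"].agg(["mean", "median", "std"]))

# Regression adjusted for the demographic predictors named in the text
# (sex, age, race, education), with the intervention dummy of interest.
adjusted = smf.ols(
    "outcome ~ intervention + C(sex) + age + C(race) + education",
    data=survey,
).fit()
print(adjusted.summary())
```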

The results from this healthcare program evaluation consisted of 773 observations derived from a total of 609 individuals: 464 with one observation, 126 with two observations, and only 19 with all three observations recorded. The evaluation was able to report that female residents at intervention sites were significantly more likely than comparison residents to have received a Pap smear in the past year. Residents at intervention sites also showed a significant decrease in mean blood pressure. There is a clear indication that healthcare services directly administered by the Integrated Service Team (IST) showed more successful outcomes. An unexpected adverse outcome, which further supports that point, is that residents at intervention sites reported difficulty obtaining transportation to outside healthcare services. It should also be noted that there was neither a significant effect on residents’ perception of change in their health status nor any increase in activities attributable to improvements in physical health.
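
As a quick sanity check on the sample accounting reported above (using only the figures quoted in the text):

```python
# Verify the reported observation counts and the share completing all assessments.
one_obs, two_obs, three_obs = 464, 126, 19

individuals  = one_obs + two_obs + three_obs              # 609 unique individuals
observations = 1 * one_obs + 2 * two_obs + 3 * three_obs  # 773 total observations
share_all_three = 100 * three_obs / individuals           # ~3.1%, i.e. under 4 percent

print(individuals, observations, round(share_all_three, 1))
```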

Overall, this evaluation represents a commendable effort given the limited sample size, the lack of program infrastructure in place, and the diversity of outcome goals and measurements. The main threat to internal validity realized in this evaluation is the high program attrition rate. As noted in the evaluation, “a declining proportion of subjects enrolled in the evaluation at baseline were available for inclusion in the cross-sectional surveys conducted at 6 and 18 months post-intervention.”

Residents who arrived between sampling occasions were allowed to participate in follow-up surveys. According to Rossi, when assessing program impact, “the evaluator should attempt to obtain outcome measures for everyone in the intervention group whether or not they actually received the full program… If full outcome data is obtained, the validity of the design for comparing the two groups is retained.” While this is encouraging for maintaining internal validity in this particular evaluation, the fact that less than 4 percent of the sample was available for evaluation at all three assessments is a staggering statistic that should be addressed in future evaluations.

The HEALTH evaluation notes that while the comparison sites served the same population as the intervention sites, the comparison sites “were affiliated with an organization that was not invited to participate in the initial phase of the HEALTH project.” This may indicate that the intervention sites were predisposed to provide better healthcare services independent of the HEALTH program itself. There is also a question regarding the IST staff, who effectively serve as the observers in this experiment. As Campbell and Stanley note in Experimental and Quasi-Experimental Designs for Research, “observers should be kept ignorant as to which students are receiving which treatments, lest the knowledge bias their ratings or records.” In the evaluation presented, there is no indication that the cross-sectional surveys were conducted blindly, suggesting possible observer bias concerns.

External validity is also a concern, as this study was conducted only in the Sacramento, CA area. Implementation of any such healthcare program would depend greatly on local support staff and facilities, as well as on challenges specific to the local population, such as transportation.

The delivery of the HEALTH intervention may have been negatively affected by problems in program implementation. As noted earlier, there were multiple reports of survey and physical testing appointment times conflicting with participants’ work or other obligations. Not only could this factor have contributed greatly to the attrition rates, but respondents and observers may also have felt rushed or dismissive about participating in the study. Other studies of healthcare for the homeless have not been published using experimental techniques. There are policy framework discussions, such as of a Boston program serving upwards of 11,000 people; however, that program has not been evaluated with a counterfactual element.

Written by Frederick Bates

Photo Source: Michael Havens (Flickr license)


  • F. Molitor et al., “Methods in Survey Research: Evidence for the Reliability of Group Administration vs Personal Interviews,” American Journal of Public Health 91, no. 5 (2001): 826.
  • Donald T. Campbell and Julian C. Stanley, Experimental and Quasi-Experimental Designs for Research (Boston: Houghton Mifflin, 1963).
  • Ciaranello et al., “Providing Health Care Services to the Formerly Homeless.”
  • Peter H. Rossi, Evaluation: A Systematic Approach, 7th ed. (Thousand Oaks, CA: Sage, 2004).
  • Ciaranello et al., “Providing Health Care Services to the Formerly Homeless.”
  • Campbell and Stanley, Experimental and Quasi-Experimental Designs for Research.

