Saturday, January 20, 2024

Support for Same Sex Marriage in the Americas


Let's look at the latest data from the AmericasBarometer, a recurring survey of most countries in the Americas. I've used it in prior posts to examine state-level homophobia and transphobia. Since my last post, there is new data from 2023, and some new questions. So, jumping right in, here is the mean of an 11-point scale of how strongly respondents approve (10) or disapprove (0) of same sex marriage (averaged over 2010, 2012, 2014, 2016, 2018 & 2023), grouped by region. Higher values indicate higher average support for same sex marriage.


And the same information, in tabular format, indicates that the highest support of same sex marriage in the Americas can be found in Canada (7.36), Uruguay (7.18), Argentina (6.68), the United States (5.90), Chile (5.82), Mexico (5.46), and Brazil (5.42), with considerably lower support throughout the Caribbean, the northernmost countries of South America, and Central America.


Tuesday, March 28, 2023

Interview Completion: Are Sexual and Gender Minority People More or Less Likely to Engage in Research?

This post is part of a series about engagement of sexual and gender minority populations in survey research. For an overview, start here.

One of the most direct measures of research participation I'm looking at is interview completion, getting to the end of the survey. Certainly there are many reasons for cutting an interview short; some of these are more closely related to a lack of interest in engaging in the research effort than others, but on the whole, the more a respondent is engaged, the more likely they are to get to the end of the survey.

Alas, out of the 25 surveys I have tabulated so far, only 3 report whether the interview was completed (the National Health Interview Survey (NHIS), the Health Information National Trends Survey (HINTS), and the American National Election Survey (ANES)). I also created measures of interview completion for 2 more surveys (the Behavioral Risk Factor Surveillance System (BRFSS) and the Household Pulse Survey (HPS)) by looking at patterns of missing values: if all values sequentially after a given point in the interview are missing, then I code that as an interview termination. Developing those measures is time-consuming, and I doubt I'll do it for any others.
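The missing-pattern rule can be sketched roughly like this (a toy illustration, not my actual coding; real surveys flag legitimate skips separately, which this version ignores):

```python
def terminated_early(row, missing=None):
    """Code an interview as terminated early when everything after the
    last answered item is missing, i.e., a trailing run of missing
    values reaches the end of the instrument."""
    answered = [i for i, v in enumerate(row) if v is not missing]
    if not answered:
        return True                      # no usable answers at all
    return answered[-1] < len(row) - 1   # trailing block of missing items
```

So `[1, 2, None, None]` codes as a termination, while `[1, None, 3]` (a mid-interview skip followed by more answers) does not.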

So, here's what I found in these 5 surveys, from most to fewest respondents:

Household Pulse Survey (waves 34-55, 2021-2023, Internet survey)
Overall, LGBT interview completion was a bit lower than among cisgender heterosexuals, lower among transgender respondents, perhaps higher for gay cismen, and lower for cismen of "another" sexual orientation. These are raw (weighted) percents, so to get an estimate of the relative likelihood of interview completion after adjusting for respondent age, state of residence, and time trend, I estimated a logistic model to get adjusted odds ratios for interview completion:
After adjustment, interview completion was actually higher for LGBT people averaged together (on this chart, 1.0 means as likely as the comparison group), for cisgender sexual minority women, and for cisgender gay and bisexual men, and about the same as cisgender heterosexuals for transgender respondents.
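For readers who haven't fit one of these before, here is a minimal from-scratch sketch of the kind of model involved, on made-up data with a single predictor (the real models also adjust for age, state of residence, and time trend, and I fit them with standard statistical software):

```python
import math

# Made-up data: (lgbt_indicator, completed_interview). Completion is 90%
# in one group and 85% in the other, so the true OR is (90/10)/(85/15).
data = [(1, 1)] * 90 + [(1, 0)] * 10 + [(0, 1)] * 85 + [(0, 0)] * 15

def fit_logistic(rows, steps=5000, lr=0.5):
    """Two-parameter logistic regression fit by plain gradient ascent."""
    b0 = b1 = 0.0
    n = len(rows)
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in rows:
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p
            g1 += (y - p) * x
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

b0, b1 = fit_logistic(data)
odds_ratio = math.exp(b1)  # 1.0 would mean equal completion odds across groups
```

The adjusted odds ratio is just the exponentiated coefficient; values above 1.0 mean higher odds of completing the interview relative to the comparison group.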


Behavioral Risk Factor Surveillance System (2014-2021, Telephone survey, restricted to states where SOGI items were asked in the demographics section)
In the crude rates, LGBT respondents were just about as likely to complete the interview as cisgender heterosexuals. Transgender people were somewhat less likely, and as in HPS, gay and bisexual cismen were more likely to complete the interview, while cismen of another sexual orientation were less likely to complete it.
Again, I did a logistic model to adjust for respondent age and state of residence, and time trend.
After adjustment, LGBT people were slightly more likely to complete interviews, transgender people less likely, gay and bi cismen more likely, and cismen of another sexual orientation less likely.


National Health Interview Survey (2014-2021, Face-to-face interviews)
Although I was able to combine several years of data here, the sample size of NHIS is considerably smaller than HPS or BRFSS, so while the comparison between sexual minority adults and heterosexuals is robust, some of the subgroups get harder to interpret. NHIS did not collect gender identity, but it did identify people who said they weren't sure about their sexual orientation.
Overall, sexual minority respondents were about as likely to complete interviews, and it looks like the questioning groups may have been less likely to complete the interview.
Again, a logistic model, adjusted for respondent age, region of residence, and time trend:
Overall, the relative likelihood of interview completion for sexual minority respondents was slightly higher, in the same range as the two larger surveys above. Subgroups are too small to interpret here.


Health Information National Trends Survey (2017-2020, Internet & Mail)
HINTS is a very rich survey, with lots of in-depth information about experience with cancer and beliefs about cancer prevention. However, with about 15,000 respondents after pooling 4 annual surveys, the sample is just too small to say anything confidently about sexual minority respondents relative to heterosexuals. Also, the reported interview completion rate is very high, which is great, but it also means there's not a lot of variation to look at from a statistical perspective.
I'm hesitant even to show model results because of this, but for the sake of completeness, here they are:



American National Election Survey (2016, 2020, mostly Internet, some face-to-face, televideo, and telephone interviews)
Really nothing to say about this survey, given that it is shy of 10,000 respondents, and again with such a high interview completion rate that there isn't much statistical variation to play with.


All 5 Surveys Together
The main value of looking at these 5 surveys, with different methodologies, covering different subject matter, and over (somewhat) different time frames, is being able to look at them all together. Here are the results of the five logistic models for LGB(T) populations compared to cisgender heterosexuals, all on the same scale:
With only 5 surveys, it doesn't make sense to do a formal meta-analysis, especially given that the surveys are really quite different from one another. Nonetheless, it is reassuring to see that the three largest studies have relative completion rates that are compatible with one another (the 2 smaller studies are also compatible with these, but also compatible with such a wide range of alternate possibilities that they are simply not informative).
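The "compatible with one another" check can be made concrete as overlapping confidence intervals. Here's a sketch with purely hypothetical odds ratios and CIs (illustrative placeholders, not the actual model results):

```python
# Hypothetical adjusted odds ratios with 95% CIs: (estimate, lower, upper).
results = {
    "HPS":   (1.10, 1.05, 1.15),
    "BRFSS": (1.08, 1.02, 1.14),
    "NHIS":  (1.12, 0.95, 1.32),
    "HINTS": (1.05, 0.60, 1.85),
    "ANES":  (0.98, 0.55, 1.75),
}

def compatible(a, b):
    """Two estimates are 'compatible' here if their 95% CIs overlap."""
    _, lo_a, hi_a = a
    _, lo_b, hi_b = b
    return lo_a <= hi_b and lo_b <= hi_a

incompatible_pairs = [(s, t) for s in results for t in results
                      if s < t and not compatible(results[s], results[t])]
# An empty list means every pair of surveys is mutually compatible.
```

Note how the wide intervals of the two small surveys make them "compatible" with nearly anything, which is exactly why they are uninformative.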
It may come as a surprise to some readers that LGBT people are, at least in terms of interview completion, more likely to engage in research, and thus perhaps slightly over-represented in research datasets.






Wednesday, February 22, 2023

Proportion LGBT in 20 Probability Samples

Six surveys asked about sexual orientation and gender identity:

01) Behavioral Risk Factor Surveillance System (2014-2021; 18+; Telephone; n=1,602,144)
    LGBT:   6.05%

02) Household Pulse Surveys (2021-2023; 18+; Internet; n=1,206,436)
    LGBT:   9.74%

03) National Crime Victimization Surveys (2017-2021; 18+; Face-to-face & telephone; n=760,408)
    LGBT:   2.39%

04a) Associated Press VoteCast (2018; 18+; Internet & telephone; n=39,864)
    LGBT:   7.73%

04b) Associated Press VoteCast (2020; 18+; Internet & telephone; n=34,868)
    LGBT:   9.27%

05) California Health Interview Survey (2021; 18+; Internet & telephone; n=24,441)
    LGBT: 10.95%

06) Collaborative Multi-racial Post-election Surveys (2012, 2016-2017; 18+; Internet; n=12,660)
    LGBT:   8.58%

Fourteen more surveys asked about sexual orientation, but not gender identity:

07) National Health Interview Survey (2014-2021; 18+; Face-to-face & telephone; n=240,719)
    LGB:   3.93%

08) National Drug Use and Health Surveys (2015-2020; 18+; Face-to-face; n=236,145)
    LGB:   5.21%

09) New York City Community Health Surveys (2001-2020; 18+; Telephone; n=155,714)
    LGB:   4.64%

10) Health Reform Monitoring Surveys (2013-2020; 18-64; Internet; n=147,203)
    LGB:   7.50%

11) National Adult Tobacco Surveys (2012-2014; 18+; Telephone; n=120,017)
    LGB:   4.22%

12) Canadian Community Health Surveys (2017-2018; 15+; Face-to-face & telephone; n=103,217)
    LGB:   3.35%

13) National Survey of Family Growth (2011-2019; 15-49; Face-to-face; n=41,174)
    LGB:   6.94%

14a) Population Assessment of Tobacco and Health, Wave 1 (2011; 18+; Face-to-face; n=31,515)
    LGB:   4.92%

14b) Population Assessment of Tobacco and Health, Wave 4 (2016; 18+; Face-to-face; n=33,415)
    LGB:   8.68%

15) Well-Being and Basic Needs Surveys (2017-2020; 18-64; Internet; n=27,449)
    LGB:   7.50%

16) National Health and Nutrition Examination Surveys (2001-2016; 18-64; Face-to-face; n=25,529)
    LGB:   5.38%

17) General Social Surveys (2008, 2010, 2012, 2014, 2016, 2018, 2021; 18+; Face-to-face, Internet & telephone; n=12,815)
    LGB:   4.72%

18) American National Election surveys (2016, 2020; 18+; Internet, face-to-face, video call & telephone; n=9,254)
    LGB:   6.57%

19a) Supplementary Empirical Teaching Units in Political Science (2016; 18+; Telephone, Internet & video call; n=3,464)
    LGB:   6.26%

19b) Supplementary Empirical Teaching Units in Political Science (2020; 18+; Face-to-face & Internet; n=7,089)
    LGB:   6.99%

20) National Social Life, Health, and Aging Project (2015-2016; 50+; Face-to-face; n=3,392)
    LGB:   2.41%

Thursday, February 2, 2023

Queer and Trans Representation in Research

WHAT IS REPRESENTATION?

    A common refrain in research on LGBT populations is that we are underrepresented in research. In many ways, that is undoubtedly true. Many data collection systems do not include items on sexual orientation, and even fewer include gender identity. And many of those that do are small enough that there is not enough of a queer/trans population to provide reliable estimates. Arguably, funding for research on (not often enough with) sexual and gender minority populations is decades overdue and falls short of the mark. And publications about sexual and gender minority populations comprise a tiny fraction of the published scientific literature.

    And yet. The number of research datasets with reliable information on sexual orientation and gender identity has expanded rapidly, first in large survey datasets, more recently in infectious disease tracking systems, and coming soon in medical records datasets and large administrative databases. Funding has increased dramatically in recent years, and there are now established journals dedicated to sexual and gender minority population research.

    In a more limited sense of 'representation', we really don't know the degree to which sexual and gender minority populations are represented in these datasets - in other words, how much more or less likely are these populations to be included in research? For age, race/ethnicity, geography, and "sex" in a broad sense, we can use Census records to compare the distribution of people included in a study with their expected distribution in the population (although this breaks down when considering gender identity). When we know these distributions, we can re-weight the analytic dataset to reflect the population at large.

    But, with sexual and gender minority populations, there is no Census standard - in fact, these surveys themselves are the closest thing we have to a standard. But there is considerable variability from one survey to another in terms of the proportion of people identifying as sexual and gender minorities, as well as variation in the questions asked - and response options offered.


DATA SOURCES

    In a series of posts I plan to explore here, I'll be looking at representation in this narrow sense (likelihood of responding to an invitation to engage in survey research) across a wide range of large probability surveys in the US, namely:

    Behavioral Risk Factor Surveillance System (2014-2021)

    Household Pulse Survey (2021-2023)

    National Health Interview Survey (2013-2021)

    National Health and Nutrition Examination Survey (1999-2019)

    National Survey of Drug Use and Health (2015-2020)

    National Survey of Family Growth (2011-2019)

    National Adult Tobacco Survey (2012-2014)

    Population Assessment of Tobacco and Health (2011, 2016)

    California Health Interview Survey (2021)

    New York City Community Health Survey (2003-2020)

    National Crime Victimization Survey (2017-2021)

    Health Reform Monitoring Surveys (2013-2020)

    Well-Being and Basic Needs Survey (2017-2020)

    General Social Survey (2008, 2010, 2012, 2014, 2016, 2018, 2021)

    American National Election Surveys (2016, 2020)

    Collaborative Multiracial Post-election Surveys (2012, 2016, 2017)

    Associated Press VoteCast (2018)

    These 17 large surveys with public use data reflect a broad range of sampling strategies (random telephone dial, internet recruitment from Census lists, panels recruited by established survey firms, quite a few in-person interviews based on physical addresses, and one using televideo interviews), on a variety of topics (public opinion polling, health surveys, crime), using a variety of question wording and response options. They are heavily weighted towards recent years, but there are some going back decades. Is there a dataset you think I've overlooked? Let me know!


MEASURING REPRESENTATION

    If there are no Census data (or other standard) for the distribution of sexual and gender identity data, how do I propose to look at relative representation of these groups in these research datasets? Indirectly. I plan to use measures that are fairly intuitive correlates of research participation, as determined in prior research.

    One of the most intuitive is how many attempts it took to get a successful interview. Presumably, people who answer the call to participate immediately are "easy" interviews, and those who take 20-50 attempts to reach are less eager to participate. So, we can look at the distribution of how many contact attempts were made to connect for an interview as a proxy for eagerness to participate. Alas, this measure is only reported publicly in 2 of the above studies.

    Another fairly intuitive proxy is how likely a respondent is to complete the interview once started. Presumably people who hang on to the end of an interview are more invested in the research endeavor than those who break off after a short period. This measure is available for 3 of the above surveys - many of these surveys only report out complete interviews (or impute values for missing data) so that there are no "short" interviews to compare to. For others, the sexual orientation and/or gender identity items are asked late enough in the interview that there is not information on these items among those who cut the interview short.

    A measure that seems to make sense (but may be less useful than it appears) is the weight assigned to a respondent. If the weighting system the survey is using works well enough, respondents who are harder to reach will have a higher weight, and those easy to reach will have a lower weight. The factors that go into these weights typically include sex, race/ethnicity, age, geography, how many phones the respondent uses and how many people could answer the phone, and interactions between these factors. So, to the degree that sexual and gender minority people are more or less likely to respond because of these factors, the relative weighting could be informative. But, to the extent that whether sexual and gender minority respondents elect to engage with researchers is related to being LGBTQ over and above those delineated factors that go into the weighting, the relative weights will fail to reflect participation.

    A less intuitive measure is called the "fraction of missing information": the proportion of items for which the respondent used a "don't know" or "not sure" response, or declined to answer. Presumably, a person who declines to answer a larger proportion of questions is less invested in the research than a person who answers everything. Of course, there are many reasons to leave questions blank or say "don't know" that have nothing to do with eagerness to participate. And I have to be careful to distinguish between questions left blank because the respondent heard them and didn't answer vs. questions skipped on purpose, or skipped because the interview had already ended. Another difficulty with this measure is that an awful lot of people answer every question, even when they truly don't know or aren't sure, so the median number of blank items is 0, a rough distribution to work with from a statistical perspective. On the plus side, these measures are available for all the studies above, except one that imputes missing and don't know values to "known" values before releasing the public use dataset.

    I've gone ahead and split these missing information measures into two categories: one based on demographic items (race/ethnicity, marital status, educational attainment, employment status, income, household composition, citizenship, language), and another based on all other items on the survey, which I've called 'substantive items' for lack of a better generic term for "everything else", whether that be a history of cancer or a presidential candidate preference.
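The two fractions can be computed along these lines (item names are hypothetical, and real surveys flag refusals, "don't know"s, and legitimate skips with distinct codes that this toy version glosses over):

```python
# Hypothetical demographic item names for the illustration.
DEMOGRAPHIC = {"race", "marital", "education", "employment", "income"}

def missing_fractions(answers):
    """answers maps item name -> value, with None meaning the respondent
    heard the item but gave no usable answer (refused / don't know).
    Items legitimately skipped, or never reached because the interview
    ended, should not appear in the dict at all."""
    demo = [k for k in answers if k in DEMOGRAPHIC]
    subst = [k for k in answers if k not in DEMOGRAPHIC]
    def frac(keys):
        return sum(answers[k] is None for k in keys) / len(keys) if keys else 0.0
    return frac(demo), frac(subst)   # (demographic, substantive)
```

One respondent might answer race but refuse income, and answer a cancer-history item but skip a vote-preference item, giving 0.5 on both fractions.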


STATEMENT OF EXPECTED HYPOTHESES

    What do I expect to see in all this? It's hard to say for sure, which is what makes it especially interesting to me. I do expect to see heterogeneity. I expect to see greater participation from sexual and gender minority populations from Internet-based recruiting than telephone, for instance. I expect to see greater participation from gay men than lesbian women, and greater still than bisexual men and women; from cisgender people than transgender. Overall, I think that LGBT people will probably be somewhat more likely to participate in research, but if I had to guess, I'd say the difference is probably pretty small, compared to differences in participation related to age, race/ethnicity and sex. I suspect that the variation between sexual and gender minority groups will be greater than the difference between LGBT people as a whole and cisgender heterosexual adults.

    I would say I don't have a strong expectation about participation between transfeminine and transmasculine people. I don't have as solid a foundation of experience to draw from. I'm also not sure about what to expect about younger or older LGBT people relative to younger or older cisgender heterosexuals, or about LGBT people belonging to minoritized racial/ethnic groups relative to cisgender heterosexual non-Hispanic Whites.


WHAT YOU SHOULD EXPECT

    Over the next weeks to months, I plan to post a variety of analyses here related to this topic. Expect to see analyses based on one survey at a time. Expect to see an analysis of the same question or proxy outcome across multiple surveys. Expect to see analyses of missing data due to particular items across multiple surveys and populations. Expect to see analyses looking at trends over time, differences across survey methodologies, differences with respect to survey topics (drug use, general health, crime victimization, politics). In other words, this topic is too big (at least in my mind) for synthesis into a single paper for publication. I want to explore it with you and figure out along the way what the paper(s) within the topic are to pursue for publication in a more formal setting.

Sunday, July 31, 2022

How Transphobic is my State (Part I)

 I think it's safe to say that the social climate in every state can be characterized as transphobic, though it also depends on who you ask, and what you ask them.

But if some states are more transphobic than others, is there a way to measure those differences in degree?

To date, most researchers have used measures based on legislation and policy to describe the climate of each state. The Movement Advancement Project has the most comprehensive listing of policies affecting transgender, genderqueer, non-binary & agender people in the United States.

Public opinion is also a promising way to measure the transphobic climate of the states, and a number of polling firms have reported public opinion on items related to transphobia. A few recent examples: NPR reports that a majority of Americans oppose allowing girls and women who are trans to participate in women's sports; Pew reports that fewer than half of Americans favor requiring health insurance to cover gender affirming therapies; and Gallup reports that support for transgender people being able to serve openly in the military has declined since 2019. For a comprehensive assessment of Americans' views, check out this poll conducted by IPSOS on behalf of the Williams Institute. Unfortunately, none of these polls report state-level results in a way that researchers like me can use to develop measures of state-level climate, as I was able to do for measures of homophobia derived from the AmericasBarometer and the American National Election Survey.

For today's blog, I have to thank Maggi Price (@MaggiPrice) and colleagues for bringing to my attention (through a terrific preprint; I don't know if I can link to it) that Project Implicit publishes a dataset ideal for assessing state-level transphobic attitudes. It's a bonanza; I've been working on it for a couple of weeks and am finally ready to share findings with you-all!

OK, here are the first set of findings, methodology below:


The diamonds indicate the average score for each state, after weighting to the state population (more details below). The states are ranked from lowest mean transphobia as assessed on a 9-item scale (Vermont) to highest (Guam, or Mississippi if you're just interested in states) on this particular measure. The vertical whiskers are 95% confidence intervals indicating the degree of uncertainty in the state-specific measures. A long line indicates a low level of certainty in the state's score, and a short line indicates a higher degree of certainty - but it is worth noting that none of these confidence intervals are short enough to have a high degree of confidence in the exact ranking of each state. Also, these confidence intervals are based on sampling variation only, and are thus far smaller than they should be if I had accurately accounted for additional sources of uncertainty - a decent rule of thumb is that the confidence intervals from a sample like this should be about doubled in size. The absolute values on the y-axis (15-50) have no simple direct interpretation; they are sums of 9 items - see details below for how they are calculated.
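A sketch of how a weighted state mean and its sampling-only CI (plus the rough "double it" adjustment) could be computed. The Kish approximation for effective sample size is a standard shortcut; the numbers and the inflation factor are illustrative, not my exact pipeline:

```python
from math import sqrt

def weighted_mean_ci(scores, weights, z=1.96, inflate=2.0):
    """Weighted mean with a widened 95% CI for one state's respondents."""
    w = sum(weights)
    mean = sum(s * wt for s, wt in zip(scores, weights)) / w
    # Weighted variance, and Kish's effective sample size to reflect
    # the precision lost to unequal weights.
    var = sum(wt * (s - mean) ** 2 for s, wt in zip(scores, weights)) / w
    n_eff = w ** 2 / sum(wt ** 2 for wt in weights)
    half = z * sqrt(var / n_eff) * inflate   # doubled, per the rule of thumb
    return mean, mean - half, mean + half
```

With equal weights this collapses to the ordinary mean and a doubled textbook CI; unequal weights shrink the effective sample size and widen the whiskers further.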

Methodology

A couple generic methodologic notes that apply to all measures I've developed from this Project Implicit dataset. The survey is designed to measure an individual's implicit bias against or for a given group (usually a minoritized or marginalized group), based on how quickly they associate "good" and "bad" words with that group compared to another group (usually the socially dominant group). The survey also asks a number of questions about explicit bias. For my purposes, I am not terribly interested in how individuals respond, but the overall tenor of a state (or in the future, metro area or smaller geography, as sample size allows), and I am interested in both the implicit and explicit measures.

This is a "convenience" sample, meaning that the questionnaire was filled out by whoever elected to do it, not from a systematic sample of Americans. Thus it is important to pay attention to who elected to fill it out, especially since I am trying to estimate these measures for the state as a whole, not for the people who elected to take the survey. Some people took it as part of a school project, or because they were encouraged to (perhaps even required to) by their employer. Some people took it because they heard about it in the news, on social media, or from a friend. Presumably, those required to take it may be more representative of the state climate (thanks to Jarvis Chen for this insight), because they would be closer to a representative population, but I've decided (for now anyway) to include respondents regardless of their reason for taking the test. In any event, a large number of people have done so (about 200,000 over 2020-2021 for the Transgender Implicit Associations Test), and I have constructed weights based on people's state of residence, sex (admittedly an imperfect variable in this case!), age group, and race/ethnicity, to attempt to make the sample more closely resemble the general population of each state according to the Census's July 2020 population estimates (or as close as I could get to that for the teritories of American Samoa, Guam, Puerto Rico and the US Virgin Islands). (In the future, watch for variations on this weighting theme, such as weighting for educational attainment, gender identity, policital ideology, county-level geography, etc. Jarvis gave me many great ideas about trying out different weighting schemes).

I'd also like to explore, in future variations, restricting to respondents aged 18-64, and/or those who are cisgender, to reflect the attitudes in the socially dominant population, but for now, I have included all respondents aged 10 to 85+, and of all gender identities.

There is some evidence of occasional "goofball" respondents (like people who say that they are all races, and that they are all genders (cismale, cisfemale, transmasculine and transfeminine), etc.), but I have not yet done anything about excluding or assigning low weights to the goofballs. It can be a bit of a tightrope walk defining who is a goofball and who is not - there is a risk of assigning someone with multiply marginalized positions as a goofball by mistake.

Eager to hear any suggestions people may have for further refinements!

Now, on to the specific measures:


Nine-Item Transphobia Scale

This is a measure of explicit bias against transgender people, measured using a 9-item scale, with responses to each item ranging from strongly disagree (1) to strongly agree (7) on a 7-point scale. Thus, the summed scale can range from 9 to 63. Here are the items:

  1. I don't like it when someone is flirting with me, and I can't tell if they are a man or a woman.
  2. I think there is something wrong with a person who says that they are neither a man nor a woman.
  3. I would be upset, if someone I'd known a long time revealed to me that they used to be another gender.
  4. I avoid people on the street whose gender is unclear to me.
  5. When I meet someone, it is important for me to be able to identify them as a man or a woman.
  6. I believe that the male/female dichotomy is natural.
  7. I am uncomfortable around people who don't conform to traditional gender roles, e.g., aggressive women or emotional men.
  8. I believe that a person can never change their gender.
  9. A person's genitalia define what gender they are, e.g., a penis defines a person as being a man, a vagina defines a person as a woman.

I required answers to all 9 items; a future version may include multiple imputation to be able to include respondents who missed a couple items - often people skip items for good reasons (like the wording doesn't make sense, or they just don't know how they feel), and it is a shame to exclude them. I also didn't do anything fancy like factor analysis or trying to account for the relative importance of each item - I just added them together.
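The scoring rule is as simple as it sounds; a sketch (my own illustration of the complete-case, straight-sum approach described above):

```python
def score_scale(item_responses):
    """item_responses: list of 9 integers from 1 (strongly disagree) to
    7 (strongly agree), or None for a skipped item. Returns the summed
    score (9 to 63), or None for anything short of a complete case."""
    if len(item_responses) != 9 or any(r is None for r in item_responses):
        return None
    return sum(item_responses)
```

A respondent who strongly disagrees with every item scores 9; one who strongly agrees with every item scores 63; anyone who skips even one item is dropped.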

This scale was asked of about half of the people taking the test. I have done nothing special to account for that fact - notably the weights are the same as those for the whole dataset, not in any way adjusted to reflect the subsample that was given this scale.

I think it's worth noting that these items don't talk about transgender people explicitly. Arguably, these items are more about a form of sexism I call "stereosexism" - the notion that there are but two genders and that these are permanent attributes. However, whatever that concept is, it is arguably very closely linked to transphobia.


Single-Item Preference for Transgender People


This next measure is based on a single item, with 7 possible responses:
  1. I strongly prefer transgender people to cisgender people. 
  2. I moderately prefer transgender people to cisgender people.
  3. I slightly prefer transgender people to cisgender people.
  4. I like cisgender people and transgendper people equally.
  5. I slightly prefer cisgender people and transgendper people.
  6. I moderately prefer cisgender people and transgendper people.
  7. I strongly prefer cisgender people and transgendper people.
Note, I copied these responses from the data documentation, including the misspelling of "transgendper" instead of "transgender" for responses 4-7, and presumably a typo in items 5-7 "cisgender and" instead of "cisgender to". I assume these are errors in the documentation, not in the original survey, but buyer beware...

For the chart above, I've re-ranked the states and territories from the least likely to prefer cisgender people (Vermont) to the most likely (Arkansas), with the diamonds indicating the average response for the state's residents, and the vertical whiskers indicating the 95% confidence interval based on sampling variability only (and again, these should be interpreted as though the confidence intervals were about twice as wide as calculated, given that this is not a random sample, and due to the influence of additional sources of variation not accounted for in the standard calculations).

This item was asked of everyone who took the test (but not answered by some). I have done nothing special to the weights to account for missing-ness on this item, just included people who actually answered.

Those of you with eagle eyes for this sort of thing may have noticed that, even though this measure has nearly double the sample of the transphobia scale above, the uncertainty around these state-level estimates is generally larger than for the 9-item scale (on a relative scale), meaning that the relative ranking of states and territories on this measure is even less stable than the previous measure. There could be a bunch of reasons for that - my leading hypothesis would be that the 9-item scale has lower variability between individuals (which is just a way of saying it is more stably measured, not that it is a closer measure of transphobia). I do like that this measure directly asks people about transgender and cisgender people; that seems much more of a direct linkage to me than the items in the scale above. On the other hand, many people (particularly the cisgender majority) do not have a strong internalized notion of what "cisgender" means, and may not be able to answer this item accurately, even though the IAT defines the terms.

Implicit Bias

You may well wonder why I didn't lead with the implicit bias measure, especially since this is the whole purpose for the data collection by Project Implicit in the first place!
The main reason I put this third is that it is the measure I have the least familiarity with, and thus the least confidence that I am aware of the measurement issues inherent to it. To be completely honest with you, I have not delved into the methodology deeply enough even to be able to describe how this measure is calculated, let alone what the scale (y-axis) means.
As before, I have ranked the states and territories from the least implicit bias against transgender people (Guam, or Utah if you're only counting states) to the most implicit bias (North Dakota), with the diamonds representing my method's best guess as to the average level of anti-trans implicit bias in the state, and the vertical whiskers indicating 95% confidence intervals - with the wider ranges meaning less confidence and the narrower ranges higher confidence.
Although this measure was asked of all respondents, it had more missingness than the single-item preference for transgender people measure above - and I have not looked into why these are missing. It could be that people tired out before the test was finished, that a data-cleaning step removes measures when people answered too slowly or too quickly, or some other factor I haven't thought of.
Another concern I have is that many respondents are pretty unfamiliar with the term "cisgender", and the implicit association test, which counts on a knee-jerk rapid response comparing "transgender" and "cisgender", may not adequately tap into this automatic level of thinking if people have to keep reminding themselves what the terms mean, as opposed to "Black" and "White", which are concepts most Americans have deeply implanted understandings of.
It also appears that the variability around each state's measure is higher still, relative to the variability between states, suggesting that the relative ranking between states is less stable than for the other two measures. I'm wary of reading too much into that at this stage, but initially, it suggests to me that either implicit bias is tough to measure precisely, or that the level of implicit bias is relatively uniform (and presumably high) across the country, whereas the explicit measures may be tapping into how socially acceptable it is to express bias, which may well vary more from state to state.

Contact with Transgender People

And one more measure - this one is about contact with transgender people. It is based on 4 items, each with a yes/no response. I have simply added these up, resulting in a scale from 4 (no contact in any category) to 8 (contact in each category):
  1. Do you have a family member who is transgender?
  2. Do you have a friend who is transgender?
  3. Do you have friendly interactions with transgender people on a regular basis?
  4. Have you ever met a transgender person?
These items were asked of nearly half the people who took the test (with no overlap between this half and the half who were asked the 9-item transphobia scale). I have only included people who answered all 4 items, making no adjustment for those missing one or more, and as before, I used the overall weighting, nothing special for this particular measure.
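A minimal sketch of how the contact scale could be built, assuming the items are coded yes=2 / no=1 (my assumption, chosen so the sum runs from 4 to 8 as described above; the column names are hypothetical, not the actual variable names in the data):

```python
import pandas as pd

# Hypothetical responses to the four contact items, coded yes=2 / no=1.
items = ["family", "friend", "regular_interaction", "ever_met"]
df = pd.DataFrame({
    "family":              [1, 2, 1],
    "friend":              [1, 2, 2],
    "regular_interaction": [1, 2, 1],
    "ever_met":            [2, 2, 2],
})

# Keep only complete cases (all 4 items answered), as described above,
# then sum the items into the 4-8 contact scale.
complete = df.dropna(subset=items).copy()
complete["contact"] = complete[items].sum(axis=1)
print(complete["contact"].tolist())
```

A respondent answering "no" to everything scores 4; one answering "yes" to everything scores 8.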

As with the other measures, I have ranked the states, this time from the least contact (the US Virgin Islands, or West Virginia if you're only counting states) to the highest levels of contact (Guam, or Vermont). Note that the ranking on this measure is conceptually in the opposite direction from the prior measures, but the states generally end up in nearly the same order (once you take the reversed ranking into account).

I think it's interesting that the variability between states is pretty low on this measure, with most states bunched in a narrow range indicating an affirmative response to 1-2 of the items above.

This type of measure is often interpreted as a precursor to people's attitudes towards a group, under the "Contact Hypothesis" elaborated by Allport in the 1950s, fairly early in America's fascination with sociology and its efforts to understand how eugenics in World War II and racializing attitudes in post-war America came into the mainstream of public thought and political action.
However, I think the relationship is more complicated than prior contact simply driving current attitudes. In this case, people who are aware of contact with transgender people are likely to have had some awareness of, and openness to, that possibility before they met (or became aware of) transgender people in their lives. I would also argue that the state climate is likely to be a strong determinant of how open people are about being transgender (and thus how likely they are to come to the conscious attention of others), and potentially even of whether people recognize that their experiences around gender are compatible with a transgender identity. Thus, I think these contact measures are better considered reflections of the state climate than causes of it.

Closing Queries
Hey, thanks for bearing with me through all this. I am super interested to hear your thoughts...
  • about the conceptual idea of measuring state-level transphobia - does this even make sense to you? Does this kind of quantification seem viable to you?
  • what the different measures are telling you about the contours of between-state variation in transphobia - are these 4 measures all measuring the same underlying construct, or do the differences between contact, implicit bias, and the 2 explicit measures of transphobia suggest something more than a single "thing" to measure?
  • how could we use these measures to examine the causal role of state-level climate on mood disorders? Gender euphoria? The impact of inhibiting gender expression on stunting the development of cisgender, as well as transgender, people?
If you want to chat, but don't want to leave a public comment, go ahead and reach me on Twitter @billandtuna. I look forward to hearing from you.

Tuesday, February 1, 2022

How Homophobic is my State (Part II)

This is the second post in a series on using public opinion polling to assess structural heteronormativity. The first (based on the AmericasBarometer surveys) can be found here.

For today's installment, I collated responses to a "feeling thermometer" from the American National Election Survey, a survey of Americans of voting age conducted to assess a wide variety of attitudes and voting patterns. For this measure, I averaged responses from surveys conducted in advance of the major elections in 2008, 2012, and 2016. Please note that these averages DO NOT use the appropriate weighting supplied with the data, and are not adjusted for anything other than which election wave the data were collected in. Thus, the means may be off (either higher or lower than they should be, and it's impossible to predict which), and the confidence intervals are probably narrower than they should be.
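Pooling the waves and taking unweighted state means, as described, might look something like this (the variable names are illustrative, not the actual ANES codebook names, and the data are made up):

```python
import pandas as pd

# Hypothetical pooled ANES extract: one row per respondent,
# with wave year, state, and the 0-100 thermometer rating.
anes = pd.DataFrame({
    "year":  [2008, 2008, 2012, 2016, 2016],
    "state": ["OH", "OH", "OH", "VT", "VT"],
    "therm_gay_lesbian": [40, 60, 55, 80, 90],
})

# Unweighted mean thermometer rating per state, pooling all three waves.
state_means = anes.groupby("state")["therm_gay_lesbian"].mean()
print(state_means)
```

Because this simple average ignores the survey weights, it inherits exactly the caveats noted above: the means may be biased, and intervals computed from it would be too narrow.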

The question wording for the "feeling thermometer" is reproduced below; respondents are then asked about a number of groups, with the order of the groups randomized from one respondent to another.

"We’d also like to get your feelings about some groups in American society. When I read the name of a group, we’d like you to rate it with what we call a feeling thermometer. Ratings between 50 degrees-100 degrees mean that you feel favorably and warm toward the group; ratings between 0 and 50 degrees mean that you don’t feel favorably towards the group and that you don’t care too much for that group. If you don’t feel particularly warm or cold toward a group you would rate them at 50 degrees. If we come to a group you don’t know much about, just tell me and we’ll move on to the next one."

"Gay Men and Lesbians"

As before, the states are ranked from least heteronormative rating to most heteronormative rating.



Friday, January 21, 2022

How Homophobic is my State?

I've been working on developing measures of heteronormativity for decades. Typically, I end up using legislation / policy, because these measures are 1) tightly related to the concept of structural heteronormativity (homophobia at the societal, not individual, level), 2) easy to explain to others, and 3) perhaps most importantly, legislation points to a direct method for redressing heteronormativity.
Another approach is to look at public opinion. In some ways, this is a more direct measure of public sentiment about QTBLG people, but it is often difficult to get access to data that is based on a large enough group of people, is reported in sufficient detail to make state-level estimates, and uses consistent methodology across enough survey waves to make those estimates decent.
Here, I present data from the AmericasBarometer that allows state-level estimates of some decent questions for the United States (and many other countries in North, Central, and South America and the Caribbean).
In order to generate these estimates, I combined data collected in multiple waves (2006, 2008, 2010, 2012, 2014 & 2016). There is definitely a time trend towards greater acceptance over this time period, but just to simplify things, I'm ignoring that for now. I have also suppressed estimates for any state based on fewer than 20 respondents (across all 6 waves combined), which means I can't rank Alaska, North Dakota, or Wyoming.
Below are (weighted, but otherwise unadjusted) estimates for three questions asked in the AmericasBarometer survey. As you can see, there is very wide variability across the states, and a decent correlation between the results of the different questions.
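The small-sample suppression rule could be sketched like this (toy data; a real version would use the supplied survey weights when computing the means, which this unweighted sketch omits):

```python
import pandas as pd

# Hypothetical respondent-level data pooled across all six waves.
df = pd.DataFrame({
    "state":   ["CA"] * 25 + ["WY"] * 5,
    "approve": [7] * 25 + [3] * 5,
})

# Mean approval and respondent count per state.
counts = df.groupby("state")["approve"].agg(mean="mean", n="count")

# Suppress any state estimate based on fewer than 20 respondents,
# per the rule described above.
reported = counts[counts["n"] >= 20]
print(reported)
```

With this cutoff, Wyoming (5 respondents in the toy data) drops out of the rankings, just as Alaska, North Dakota, and Wyoming do in the real estimates.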

D5: And now on a different topic, thinking about homosexuals, how strongly do you approve or disapprove of such people being permitted to run for public office?
¿Con qué firmeza aprueba o desaprueba que los homosexuales puedan postularse para cargos públicos?
Scaled 1-10, with 1=strongly disapprove, 10=strongly approve (asked in all waves)
States ranked by mean score, from least homophobic to most homophobic:



D6: How strongly do you approve or disapprove of same-sex couples having the right to marry?
¿Con qué firmeza aprueba o desaprueba que las parejas del mismo sexo puedan tener el derecho a casarse?
Scaled 1-10, with 1=strongly disapprove, 10=strongly approve (asked in 2010, 2012, 2014 & 2016)
(Data suppressed for District of Columbia, Hawai'i, Idaho, Montana, Rhode Island & Vermont)
 

DIS35a: Now you are going to read a list of several groups of people. Can you check off any groups that you would not want to have as neighbors? Gays. Would you mind having them as neighbors?
Vamos a mostrarle una lista de varios grupos de personas. ¿Podría decirme si hay algunos de ellos que no le gustaría tener como vecinos? Homosexuales. ¿No los quisiera tener de vecinos?
Yes=1, No=0 (asked only in 2012)