Philadelphia’s Changing Voting Blocs

In the past, I’ve relied heavily on what I call Philadelphia’s Voting Blocs, groups of Divisions that vote for similar candidates. These provide a simplified but extremely powerful way to capture broad geographic trends in candidates’ performance. They’re built on top of the same methodology that powers the Turnout Tracker and the Needle.

One thing that’s always bothered me is that I’ve assumed the Blocs were the same in 2002 as they are today. I wasn’t allowing the boundaries to change. As someone who literally has a Ph.D. in measuring the movement of emergent neighborhood boundaries, this is off brand.

Today, I’ll relax that assumption to fit a model that allows the Blocs to change over time.

The old, time-invariant Voting Blocs

First, here’s how the Blocs were modeled until today. The source data is a giant matrix with rows for each division and columns for each candidate from elections since 2002. I model the votes \(x_{ij}\) in division \(i\) for candidate \(j\) as \[ \log(E[x_{ij}]) = \log(T_{ir_j}) + \mu_j + U_i’DV_j \] where \(T_{ir_j}\) is the turnout in Division \(i\) for candidate \(j\)’s race (\(r_j\)), \(\mu_j\) is a candidate mean, \(U_i\) is a \(K\)-length vector of latent scores for division \(i\) (I’ll use \(K=3\)), \(V_j\) is a \(K\)-length vector of latent scores for candidate \(j\), and \(D\) is a \(K \times K\) diagonal matrix of scaling factors. For my original Voting Blocs, I didn’t fit this directly; instead, I calculated \(\hat{\mu}_j\) as the sample mean of \(\log(x_{ij}/T_{ir_j})\), then used SVD on the residuals to calculate the matrices \(U\), \(D\), and \(V\).

The result was a set of latent scores for divisions and candidates: candidates with positive scores in a dimension did disproportionately well in divisions with a positive score in that dimension, and disproportionately poorly in divisions with a negative score (and vice versa for candidates with negative scores; the sign itself is arbitrary).
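To make that concrete, here’s a minimal toy sketch, using synthetic data rather than the real election matrix, of how a rank-\(K\) SVD produces division and candidate scores and how a single residual is reconstructed from them. The variable names here are purely illustrative; the actual data prep follows below.

View code
# Toy illustration only: a small synthetic "residual" matrix and its rank-K SVD.
set.seed(66)
toy <- matrix(rnorm(6 * 4), nrow = 6, ncol = 4)  # 6 "divisions" x 4 "candidates"
K_toy <- 2
s <- svd(toy, nu = K_toy, nv = K_toy)

# The rank-K reconstruction of division i, candidate j is sum_k U[i,k] * D[k] * V[j,k].
i <- 1; j <- 2
sum(s$u[i, ] * s$d[1:K_toy] * s$v[j, ])

# A positive product s$u[i, k] * s$v[j, k] in dimension k means candidate j did
# disproportionately well in division i along that dimension (and the reverse for
# a negative product); flipping the sign of both changes nothing.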

Here are those dimensions:

View code
library(tidyverse)
library(sf)
library(magrittr)

# setwd("C:/Users/Jonathan Tannen/Dropbox/sixty_six/posts/svd_time/")

source("../../data/prep_data/data_utils.R", chdir=TRUE) # most_recent_file
source("../../admin_scripts/util.R")

PRESENT_VINTAGE <- "201911"
OFFICES <- c(
  "UNITED STATES SENATOR", "PRESIDENT OF THE UNITED STATES",
  "MAYOR", "GOVERNOR", "DISTRICT ATTORNEY", #"DISTRICT COUNCIL", 
  "COUNCIL AT LARGE", "CITY COMMISSIONERS"
)

df_raw <- most_recent_file("../../data/processed_data/df_major_") %>%
  readRDS() %>%
  mutate(warddiv = pretty_div(warddiv))

MIN_YEAR <- 2002 

df <- df_raw %>% 
  filter(
    candidate != "Write In",
    substr(warddiv, 1, 2) != "99", 
    (election_type == "primary" & substr(party, 1, 3) == "DEM") | election_type=="general", 
    office %in% OFFICES
  ) %>%
  filter(year >= MIN_YEAR) %>%
  group_by(year, election_type, warddiv, office) %>%
  mutate(total_votes = sum(votes)) %>%
  ungroup() %>%
  # filter(total_votes > 0) %>%
  mutate(
    pvote = votes / total_votes,
    candidate = factor(candidate),
    warddiv = factor(warddiv),
    year = asnum(year) - MIN_YEAR
  ) %>%
  group_by(year, election_type, office) %>%
  mutate(ncand = length(unique(candidate))) %>%
  filter(ncand > 1) %>%
  group_by(year, election_type, office, candidate) %>%
  # filter(n() > 1703) %>%
  ungroup()

df %<>%
  unite(candidate_key, office, candidate, party, year, election_type, remove=FALSE)

candidates <- df %>% 
  group_by(candidate_key, year) %>%
  summarise(
    pvote_city = sum(votes) / sum(total_votes),
    mean_log_pvote = mean(log(votes+1) - log(total_votes+ncand)),
    .groups="drop"
  )

df_wide <- df %>%
  left_join(candidates) %>%
  mutate(resid = log(votes + 1) - log(total_votes + ncand) - mean_log_pvote) %>%
  dplyr::select(candidate_key, warddiv, resid) %>%
  spread(key=candidate_key, value=resid)

mat <- as.matrix(df_wide %>% dplyr::select(-warddiv))
row.names(mat) <- df_wide$warddiv

mat[is.na(mat)] <- 0

K <- 3
svd_0 <- svd(mat, nu=K, nv=K)

U_0 <- data.frame(
  warddiv = row.names(mat),
  alpha = svd_0$u,
  beta = matrix(0, nrow=nrow(mat), ncol=K)
)

V_0 <- data.frame(
  candidate_key = colnames(mat),
  score=svd_0$v
) %>%
  left_join(candidates %>% dplyr::select(candidate_key, mean_log_pvote))

D_0 <- svd_0$d[1:K]
View code
divs <- st_read(
  sprintf("../../data/gis/warddivs/%s/Political_Divisions.shp", PRESENT_VINTAGE)
) %>%
  mutate(warddiv = pretty_div(DIVISION_N))

map_u <- function(U, D, years=c(2002, 2020), dimensions=1:K){
  U_gg <-  U %>% 
    pivot_longer(cols=c(starts_with("alpha"), starts_with("beta"))) %>%
    separate(name,into=c("var", "num"), sep="\\.") %>%
    pivot_wider(names_from=var, values_from=value) %>%
    left_join(data.frame(num=as.character(dimensions), d=D)) %>%
    inner_join(
      as.data.frame(
        expand.grid(num = as.character(dimensions), year=years)
      )
    ) %>%
    mutate(val = (alpha + beta * (year-MIN_YEAR)) * d)
  
  ggplot(
    divs %>% left_join(U_gg)
  ) + 
    geom_sf(aes(fill=val), color=NA) + 
    scale_fill_gradient2("Dimension\nScore", low=strong_red, high=strong_blue) +
    facet_grid(num ~ year, labeller=labeller(.rows=function(x) sprintf("Dim %s", x))) +
    theme_map_sixtysix() %+replace% 
    theme(legend.position = c(1.5, 0.5), legend.justification="center")
}

map_u(U_0, D_0, 2002) + ggtitle("SVD Results")

To those familiar with Philadelphia’s racial geography, Dimension 1 has clearly captured the White-Black political divide (or, similarly, the Democratic-Republican one). It’s important to remember that the algorithm has no demographic or spatial information. Any spatial patterns or correlations with race arise simply because those divisions vote for similar candidates.

The candidates who did disproportionately best in the red divisions are all Republicans: John McCain in 2008, Mitt Romney in 2012, Sam Katz in 2003. In Democratic primaries, the candidates who did disproportionately well were Hillary Clinton in 2008 and John O’Neill in 2017. Remember that this map adjusts for a candidate’s overall performance. So it’s not that John McCain won the red divisions, but that he did better than his citywide 16%.

Conversely, the candidates who did disproportionately best in the blue divisions were Chaka Fattah in the 2007 primary, Tariq Karim El-Shabazz in 2017, and Anthony Hardy Williams in 2015.

The second dimension is weaker than the first (as measured by \(D\), and visible in the map as paler colors). It captures candidates who did disproportionately well in Center City and the ring around it, and in Mount Airy and Chestnut Hill.

The candidates who did disproportionately best in the blue divisions were all third-party Council challengers: Andrew Stober in 2015, Nicolas O’Rourke in 2019, Kendra Brooks in 2019, and Kristin Combs in 2015. The candidates who did disproportionately best in the red divisions were John O’Neill and Michael Untermeyer in the 2017 DA primary and Ed Neilson in the 2015 Council primary.

The third dimension is the weakest, and has identified an interesting pattern: lumping together the Northwest, parts of the Northeast, and deep South Philly as blue, and Hispanic North Philly with Penn and other young sections of the city as red. This dimension has identified Democratic party power: the candidates who did disproportionately best in the blue divisions all had strong party backing, including Edgar Howard in the 2003 Commissioner primary, Allan Domb and Derek Green in the 2019 primary, and Jim Kenney in the 2011 primary. The candidates who did disproportionately best in the red divisions were non-party challengers (who didn’t align with Dimension 2’s progressive candidates): Nelson Diaz for 2015 mayor, Joe Vodvarka for 2010 senate, and third-party mayoral candidates Boris Kindij and Osborne Hart in 2015.

A fascinating note: remember that the model doesn’t know anything about space. There is nothing built into the model that tries to say neighboring divisions should have similar scores. All of the spatial correlations in the scores are purely because those divisions vote similarly.

Changes over time

All of the above I’ve discussed before. But the thing that’s bothered me is that these boundaries have all clearly changed since 2002. The base for progressive challengers has expanded into the ring around Center City: University City, Fishtown and Kensington, East Passyunk. And the demographics of the city have changed: we have a strongly growing Hispanic population, and Black householders continue to grow in Philadelphia’s Middle Neighborhoods. If you naively applied the boundaries from the maps above to a 2020 election, you would miss important shifts on the edges. We can do better.

Consider candidates who did well in each dimension, but from early and late in the data. Here are maps of some candidates that had large scores in the first dimension:

View code
map_keys <- function(key_df){
  dim_1_map <- df %>% 
    inner_join(key_df) %>%
    arrange(sign, time) %>%
    mutate(
      candidate_key = factor(candidate_key, levels=unique(candidate_key)),
      votes=votes+1,
      total_votes=total_votes+ncand,
      pvote=votes/total_votes
    )
  
  dim_1_map %<>% 
    group_by(candidate_key) %>%
    mutate(
      pvote_city = sum(votes) / sum(total_votes)
    )
  
  winsorize <- function(x, pct=0.95){
    cutoff <- quantile(abs(x), pct, na.rm = T)
    replace <- abs(x) > cutoff 
    x[replace] <- sign(x[replace]) * cutoff
    x
  }
  
  ggplot(divs %>% left_join(dim_1_map)) +
    geom_sf(aes(fill=winsorize(log10(pvote / pvote_city))), color=NA) +
    scale_fill_viridis_c("log(\n % of Vote /\n % of Vote in city\n)") +
    facet_wrap(
      ~candidate_key, 2, 2, 
      labeller=labeller(
        candidate_key = function(key){
          candidate <- gsub(".*_(.*)_.*_(.*)_(.*)", "\\1", key) %>% format_name
          year <- as.integer(gsub(".*_(.*)_.*_(.*)_(.*)", "\\2", key)) + MIN_YEAR
          election <- gsub(".*_(.*)_.*_(.*)_(.*)", "\\3", key) %>% format_name
          sprintf("%s, %s %s", candidate, year, election)
        }
      )
    ) +
    theme_map_sixtysix() %+replace% theme(legend.position="right")
}

map_keys(
  tribble(
    ~candidate_key, ~sign, ~time,
    "MAYOR_SAM KATZ_REPUBLICAN_1_general", -1, 0,
    "PRESIDENT OF THE UNITED STATES_DONALD J TRUMP_REPUBLICAN_14_general", -1, 1,
    "MAYOR_JOHN F STREET_DEMOCRATIC_1_general", 1, 0,
    "MAYOR_ANTHONY HARDY WILLIAMS_DEMOCRATIC_13_primary", 1, 1
  )
) +
  ggtitle("Candidates with extreme scores in Dimension 1")

Notice that the boundaries in some places changed, such as Street’s performance in University City versus Hardy Williams’.

Here’s Dimension 2:

View code
map_keys(
  tribble(
    ~candidate_key, ~sign, ~time,
    "COUNCIL AT LARGE_KENDRA BROOKS_WORKING FAMILIES PARTY_17_general", -1, 1,
    "COUNCIL AT LARGE_ANDREW TOY_DEMOCRATIC_5_primary", -1, 0,
    "MAYOR_ROBERT A BRADY_DEMOCRATIC_5_primary", 1, 0,
    "DISTRICT ATTORNEY_JOHN O NEILL_DEMOCRATIC_15_primary", 1, 1
  )
) +
  ggtitle("Candidates with extreme scores in Dimension 2")

The main takeaway from the maps is that the Center City progressive bloc has expanded outward. Toy did better in a core Center City region, whereas Brooks outperformed to the West, South, and North of where he did. It’s a little hard to see, but Brady also won much more of Fishtown and Kensington than O’Neill did.

And finally, Dimension 3:

View code
map_keys(
  tribble(
    ~candidate_key, ~sign, ~time,
    "MAYOR_NELSON DIAZ_DEMOCRATIC_13_primary", -1, 1,
    "COUNCIL AT LARGE_JUAN F RAMOS_DEMOCRATIC_1_primary", -1, 0,
    "COUNCIL AT LARGE_KATHERINE GILMORE RICHARDSON_DEMOCRATIC_17_primary", 1, 1,
    "MAYOR_MICHAEL NUTTER_DEMOCRATIC_5_primary", 1, 0
  )
) +
  ggtitle("Candidates with extreme scores in Dimension 3")

Notice that the Hispanic cluster has expanded into the Northeast.

Time-varying Blocs

Instead of the static model, consider one where divisions’ scores are allowed to change over time: \[ \log(E[x_{ij}]) = \log(T_{ir_j}) + \mu_j + (\alpha_i + \beta_i y_j)’DV_j \] where \(\alpha_i + \beta_i y_j\) is a linearly changing vector of division \(i\)’s scores in year \(y_j\). We’ve now allowed the embedding of the divisions in University City, for example, to become more positive in the progressive Dimension 2 between 2002 and 2020.
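As a quick illustration of what the time term does, here’s a minimal sketch of how a division’s effective score in dimension \(k\) now depends on the year. The function and values are made up for illustration; the real scores come from the fit below.

View code
# Illustrative only: a division's effective score in dimension k at a given year.
# alpha_ik and beta_ik would come from the fitted model; these values are made up.
score_at_year <- function(alpha_ik, beta_ik, d_k, year, min_year = 2002) {
  d_k * (alpha_ik + beta_ik * (year - min_year))
}

# A division whose progressive (Dimension 2) score drifts upward over the period:
score_at_year(alpha_ik = -0.01, beta_ik = 0.002, d_k = 50, year = 2002)  # -0.5
score_at_year(alpha_ik = -0.01, beta_ik = 0.002, d_k = 50, year = 2020)  #  1.3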

Fitting this model isn’t as easy as SVD. I’ll find a maximum likelihood solution numerically, alternating Poisson regressions for the division and candidate scores (with, just for fun, a gradient-descent version in the new R torch package), initialized at the time-invariant SVD solution. Since I’m now using a likelihood, I’ll also assume a Poisson distribution for \(x\). [One mathematical note: we lose SVD’s guarantee that the U- and V-vectors will be orthogonal. I’m not really worried about this, and am not convinced there are practical implications as long as we normalize sufficiently, but be forewarned.]

View code
VERBOSE <- TRUE
printv <- function(x, ...) if(VERBOSE) print(x, ...)

U_df <- U_0
V_df <- V_0
D <- D_0

update_U <- function(df, D, V_df){
  form <- sprintf(
    "votes ~ -1 + %s",
    paste(sprintf("dv.%1$i + year:dv.%1$i", 1:K), collapse=" + ")
  )
  
  for(k in 1:K){
    var.k <- function(stem) sprintf("%s.%i", stem, k)
    V_df[[var.k("dv")]] <- D[k] * V_df[[var.k("score")]] 
  }
  
  U_new <- df %>% 
    mutate(votes=round(votes)) %>%
    left_join(
      V_df %>% select(candidate_key, mean_log_pvote, starts_with("dv.")), 
      by=c("candidate_key")
    ) %>%
    filter(total_votes > 0) %>%
    group_by(warddiv) %>%
    do(
      broom::tidy(
        glm(
          as.formula(form), 
          data = ., 
          family=poisson(link="log"),
          offset=log(total_votes) + mean_log_pvote
        )
      ) 
    ) %>% 
    ungroup()
  
  U_new %<>%
    mutate(
      term_clean=case_when(
        grepl("year", term) ~ gsub(".*dv\\.([0-9]+)(:.*|$)", "beta.\\1", term),
        TRUE ~ gsub("dv\\.([0-9]+)$", "alpha.\\1", term)
      )
    ) %>%
    select(warddiv, term_clean, estimate) %>%
    spread(key=term_clean, value=estimate)
  
  return(U_new)
}


update_V <- function(df, U_df, D){
  
  dfu <- df %>% 
    left_join(U_df,by="warddiv")
  
  for(k in 1:K){
    var.k <- function(stem) sprintf("%s.%i", stem, k)
    dfu[[var.k("du")]] <- D[k] * (dfu[[var.k("alpha")]] + dfu[[var.k("beta")]] * dfu$year)
  }
  
  form <- sprintf(
    "votes ~ 1 + %s",
    paste(sprintf("du.%1$i", 1:K), collapse=" + ")
  )
  
  V_new <-  dfu %>%
    mutate(votes=round(votes)) %>%
    filter(total_votes > 0) %>%
    group_by(candidate_key) %>%
    do(
      broom::tidy(
        glm(
          as.formula(form), 
          data = .,
          family=poisson(link="log"),
          offset=log(total_votes)
        )
      ) 
    ) %>% 
    mutate(
      term = case_when(
        grepl("^du", term) ~ gsub("^du", "score", term),
        term=="(Intercept)" ~ "mean_log_pvote"
      )
    ) %>%
    select(candidate_key, term, estimate) %>%
    spread(key=term, value=estimate)
  
  return(V_new)
}

scale_udv <- function(U_df, D, V_df){
  for(k in 1:K){
    var.k <- function(stem) paste0(stem, ".", k)
    
    sum_sq <- sum(V_df[[var.k("score")]]^2)
    D[k] <- D[k] * sqrt(sum_sq)
    V_df[[var.k("score")]] <- V_df[[var.k("score")]] / sqrt(sum_sq)
    
    u <- U_df[[var.k("alpha")]] + outer(U_df[[var.k("beta")]], 0:max(df$year))
    sum_sq <- sum(u^2)
    D[k] <- D[k] * sqrt(sum_sq)
    U_df[[var.k("alpha")]] <- U_df[[var.k("alpha")]] / sqrt(sum_sq)
    U_df[[var.k("beta")]] <- U_df[[var.k("beta")]] / sqrt(sum_sq)
  }
  
  return(list(U=U_df, D=D, V=V_df))
}


predict_score <- function(df, U_df, V_df, D){
  outer <- df %>%
    select(warddiv, candidate_key, year, votes, total_votes) %>%
    left_join(U_df, by="warddiv") %>%
    left_join(V_df, by=c("candidate_key"))
  
  vec <- 0
  for(k in 1:K){
    var.k <- function(var) paste0(var, ".", k)
    a <- outer[[var.k("alpha")]]
    b <- outer[[var.k("beta")]]
    u <- (a + b*outer$year)
    v <- outer[[var.k("score")]]
    vec <- vec +  u * v * D[k]
  }
  
  outer$udv <- vec
  outer$log_pred <- outer$mean_log_pvote + log(outer$total_votes) + outer$udv
  outer$pred <- exp(outer$log_pred)
  outer$resid <- outer$votes - outer$pred
  return(outer %>% select(candidate_key, warddiv, udv, votes, pred, log_pred, resid, year))
}

calc_ll <- function(pred){
  sum(
    dpois(round(pred$votes), pred$pred, log=TRUE)[df$total_votes > 0]
  )
}

pred_0 <- predict_score(df, U_0, V_0, D_0)
resids <- calc_ll(pred_0)

RUN <- FALSE
if(RUN){
  for(i in 1:100){
    U_df <- update_U(df, D, V_df)
    new_pred <- predict_score(df, U_df, V_df, D)
    # plot_compare_to_svd(new_pred)
    printv(
      sprintf("%i U: %0.6f", i, calc_ll(new_pred))
    )
    resids <- c(resids, calc_ll(new_pred))
    
    V_df <- update_V(df, U_df, D)
    new_pred <- predict_score(df, U_df, V_df, D)
    # plot_compare_to_svd(new_pred)
    printv(
      sprintf("%i V: %0.6f", i, calc_ll(new_pred))
    )
    resids <- c(resids, calc_ll(new_pred))
    
    ## Not necessary to model D, since V is always maximized. Instead, just rescale it.
    # D <- update_D(df, U_df, V_df)
    scaled <- scale_udv(U_df, D, V_df)
    U_df <- scaled$U
    D <- scaled$D
    V_df <- scaled$V
    
    new_pred <- predict_score(df, U_df, V_df, D)
    # plot_compare_to_svd(new_pred)
    printv(
      sprintf("%i D: %0.6f", i, calc_ll(new_pred))
    )
    if(abs(calc_ll(new_pred)-resids[length(resids)]) > 1e-8) 
      stop("D changed resids, it shouldn't.")
    plot(log10(diff(resids)), type="b")
  }
  
  res <- list(U=U_df, D=D, V=V_df)
  saveRDS(res, file=dated_stem("svd_time_res", "", "RDS"))
} else {
  res <- readRDS(max(list.files(pattern="svd_time_res")))
  U_df <- res$U
  V_df <- res$V
  D <- res$D
}
View code
## Just for fun, I figured I'd try the model in the new R torch package too :)

if(RUN){
  library(torch)  
  
  V_t <- torch_tensor(
    as.matrix(V_df %>% ungroup() %>% select(starts_with("score"))),
    requires_grad=TRUE
  )
  cand_means <- torch_tensor(V_df$mean_log_pvote, requires_grad=TRUE)
  
  alpha <- torch_tensor(
    as.matrix(U_df %>% select(starts_with("alpha"))), 
    requires_grad=TRUE
  )
  beta <- torch_tensor(
    as.matrix(U_df %>% select(starts_with("beta"))), 
    requires_grad=TRUE
  )

  year <- torch_tensor(t(t(df$year)), requires_grad=FALSE)
  
  # Don't let D change, since V will scale freely.
  D_t <- torch_tensor(D[1:K], requires_grad=FALSE)

  
  cands_i <- match(df$candidate_key, V_df$candidate_key)
  divs_i <- match(df$warddiv, U_df$warddiv)
  
  votes <- torch_tensor(df$votes, requires_grad=FALSE)
  log_total_votes <- torch_tensor(df$total_votes, requires_grad=FALSE)$log()
  
  valid_rows <- df$total_votes > 0
  
  create_dfs <- function(alpha, beta, D, V, cand_means){
      U_df <- data.frame(
        warddiv=U_df$warddiv,
        alpha=as_array(alpha),
        beta=as_array(beta)
      )
      
      V_df <- data.frame(
        candidate_key=V_df$candidate_key,
        score=as_array(V),
        mean_log_pvote=as_array(cand_means)
      )
      
      return(scale_udv(U_df, as_array(D), V_df))
  }
  
  cand_rows <- lapply(candidates$candidate_key, function(x) which(df$candidate_key == x))
  
  learning_rate <- 1e-4
  lls <- c()
  for (t in seq_len(5e3)) {
    alpha_i <- alpha[divs_i]
    beta_i <- beta[divs_i]
    cand_means_i <- cand_means[cands_i]
    V_i <- V_t[cands_i,]
    
    udv <- (alpha_i + beta_i$mul(year))$mul(D_t)$mul(V_i)$sum(2)
    log_pred <- cand_means_i$add(log_total_votes)$add(udv)

    loss <- nn_poisson_nll_loss()
    ll <- loss(
      log_pred[valid_rows],
      votes[valid_rows]
    )
    
    ll$backward()
    
    if (t %% 100 == 0 || t == 1){
      with_no_grad({
        lls <- c(lls, as.numeric(ll))
        cat("Step:", t, ":\n", as.numeric(ll), "\n")
        
        dfs <- create_dfs(alpha, beta, D_t, V_t, cand_means)
        new_pred <- predict_score(df, dfs$U, dfs$V, dfs$D)
        resids <- c(resids, calc_ll(new_pred))
        plot(log10(diff(resids)), type="b")
        cat(tail(resids, 1), "\n")
      })
    }
    
    if(is.na(as.numeric(ll))) stop("Bad ll")
    
    with_no_grad({
      V_t$sub_(learning_rate * V_t$grad)
      cand_means$sub_(learning_rate * cand_means$grad)
      alpha$sub_(learning_rate * alpha$grad)
      beta$sub_(learning_rate * beta$grad)
      
      # D$sub_(learning_rate * D$grad)
      
      V_t$grad$zero_()
      cand_means$grad$zero_()
      alpha$grad$zero_()
      beta$grad$zero_()
      # D$grad$zero_()
      
    })
  }
  
  res <- create_dfs(alpha, beta, D_t, V_t, cand_means)
  U_df <- res$U
  D <- res$D
  V_df <- res$V
  saveRDS(res, file=dated_stem("svd_time_res", "", "RDS"))
}

The results show how the dimensions have changed over time.

View code
# V_df %>% 
#   # filter(grepl("_primary", candidate_key)) %>%
#   arrange(-score.3)

map_u(U_df, D, c(2002, 2020)) + ggtitle("Time-Varying Results")

The first dimension, which I said captures Black-White divides (or, similarly, Democratic-Republican), shows that the blue region has expanded in the Northwest and in Overbrook/Wynnefield, while the red region has expanded outward from its dense Center City core and lost ground in the lower Northeast. John Street did disproportionately well in the blue divisions in 2003, while Tariq El-Shabazz did disproportionately well in 2017. Meanwhile, the candidates who did disproportionately best in the red divisions are Republicans: John McCain, Mitt Romney, Sam Katz.

The second dimension, for which I said blue captures progressive candidates, has expanded outward even further from Center City, now covering much of Fishtown and Kensington, upper South Philly, and Brewerytown. Meanwhile, the wealthy progressive base in Wynnefield and Overbrook is gone, replaced by the strong-Democrat Dimension 1.

The third dimension is harder to interpret. In 2002, it was strongly red in Hispanic North Philly, deep South Philly, and Overbrook. By 2020, it’s broadly red in Hispanic North Philly up into the lower Northeast. Meanwhile, the blue divisions include the Northeast and the Northwest. The general election candidates who do disproportionately well in the red divisions are third-party candidates–Osborne Hart and John Staggs in 2015, Neal Gale in 2018–and the candidates who do well in the Democratic primary typically have Hispanic surnames–Nelson Diaz in 2015, Humberto Perez in 2011, Deja Lynn Alvarez in 2019. Remember, this is all after controlling for the stronger Dimensions 1 and 2, and it is not a terribly influential dimension.

Changing Voting Blocs

The Voting Blocs themselves are a discretization of these continuous scores into four categories.

In previous iterations, I hand-curated the Voting Blocs by choosing cutoffs for the categories. Now, since we have different scores across different years, I’ll try to automate it. I’ll use simple K-means clustering on the scores.

View code
years <- c(2002, 2020)

mutate_add_score <- function(U_df, D, year, min_year=MIN_YEAR){
  year_dm <- year - min_year
  for(k in 1:K){
    var.k <- function(x) sprintf("%s.%i", x, k)
    U_df[[var.k("score")]] <- D[k] * (
      U_df[[var.k("alpha")]] + U_df[[var.k("beta")]] * year_dm
    )
  }
  return(U_df)
}

div_cats <- purrr::map(
  c(2002, 2020), 
  function(y) mutate_add_score(U_df, D, as.integer(y))
) %>%
  bind_rows(.id = "id") %>%
  mutate(year = c(2002, 2020)[as.integer(id)])

# plot(div_cats %>% select(starts_with("score")))
  
km <- kmeans(
  div_cats[, c("score.1","score.2","score.3")], 
  centers=matrix(
    10 * c(
      1, -1, 0, 
      -1, 1, 0, 
      0, -1, -1, 
      -1, -1, 1
    ), 
    4, 3, 
    byrow=T
  )
)

cats <- c(
  "Black Voters",
  "Wealthy Progressives",
  "Hispanic Voters",
  "White Moderates"
)
div_cats$cluster <- factor(cats[km$cluster], levels=cats)
cat_colors <- c(light_blue, light_red, light_orange, light_green)
names(cat_colors) <- cats  

plot(div_cats %>% select(starts_with("score")), col=cat_colors[km$cluster])

The three score dimensions are chopped into four groups. Bloc 1 (blue) has positive scores in Dimension 1. These are the Black Voter divisions. Bloc 2 (red) has middling scores in Dimension 1 but positive scores in Dimension 2. These are the Wealthy Progressive divisions. Bloc 3 (orange) has middling scores in Dimension 1, negative scores in Dimension 2, and negative scores in Dimension 3. These are the Hispanic Voter divisions. (I previously called these Hispanic North Philly, but once we allow for time, it turns out that some South Philly divisions in 2002 were also in the group.) Bloc 4 (green) has negative scores in Dimension 1 and negative scores in Dimension 2. These are the White Moderate divisions.

View code
ggplot(divs %>% left_join(div_cats)) +
  geom_sf(aes(fill=cluster), color=NA) +
  scale_fill_manual(NULL, values=cat_colors) +
  facet_wrap(~year) +
  theme_map_sixtysix() %+replace%
  theme(legend.position="bottom", legend.direction="horizontal") +
  ggtitle("Voting Blocs over time")

In the maps above, you can clearly see the expansion of the Wealthy Progressive divisions outward from Center City, and growth of the Black Voter divisions in North and West Philly, along with a shift in the Hispanic Voter divisions eastward and up into the lower Northeast.

With the moving boundaries, the changes in the Blocs’ share of the vote are even starker than before.

View code
mutate_add_cat <- function(U_df, D, year, km, min_year=MIN_YEAR){
  U_score <- mutate_add_score(U_df, D, year, min_year)
   
  cluster <- apply(
    as.matrix(km$centers),
    1,
    function(center) {
      apply(
        as.matrix(U_score %>% select(starts_with("score"))),
        1,
        function(row) sum((row - center)^2)
      )
    }
  )
  
  cat <- cats[apply(cluster, 1, which.min)]
  
  return(U_score %>% mutate(cat=cat))
}

div_cats <- purrr::map(
  2002:2020, 
  function(y) mutate_add_cat(U_df, D, as.integer(y), km)
) %>% bind_rows(.id = "id") %>% mutate(year=as.integer(id)-1)

turnout_df <- df %>% 
  filter(is_topline_office) %>%
  group_by(year, election_type, warddiv) %>%
  summarise(turnout=sum(votes), .groups="drop") %>%
  left_join(div_cats) %>%
  group_by(year, election_type) %>%
  do(
    mutate_add_cat(U_df=., D=D, year=.$year + MIN_YEAR, km=km)
  )

turnout_cat <- turnout_df %>% 
  group_by(year, election_type, cat) %>%
  summarise(turnout=sum(turnout)) %>%
  group_by(year, election_type) %>%
  mutate(prop=turnout/sum(turnout))

ggplot(
  turnout_cat, 
  aes(x=year+MIN_YEAR, y=100*prop, color=cat)
) +
  geom_line(aes(linetype=election_type), size=2)+
  geom_text(
    data=tribble(
      ~prop, ~cat,
      0.48, "Black Voters",
      0.35, "Wealthy Progressives",
      0.21, "White Moderates",
      0.06, "Hispanic Voters"
    ),
    aes(label=cat),
    fontface="bold",
    x=2015.5,
    hjust=0
  ) +
  scale_color_manual(values=cat_colors, guide=FALSE) +
  theme_sixtysix() +
  expand_limits(y=0, x=2021) +
  labs(
    title="Voting Blocs' proportions of turnout",
    subtitle="Grouped by changing blocs",
    y="Percent of Turnout",
    x=NULL,
    linetype=NULL
  )

Black Voters have seen an increasing share of the turnout since 2002, though that’s somewhat mitigated by changes since 2016. The Wealthy Progressive share took a clear leap in 2017 and after. White Moderate and Hispanic Voter shares have seen a steady decline since 2002. Notice that none of this applies directly to people; these are traits of divisions. For example, if the Hispanic population is becoming more dispersed across the city, or voting more similarly to the other Voting Blocs, Hispanic voters may represent a steady share of the electorate even while divisions clearly identifiable as Hispanic Voter divisions become sparser. This is an instance of the classic ecological inference problem.

Next Steps

I’ll be adapting all of my tooling: the Turnout Tracker, Election Needle, and the Voting Blocs, to use these time-varying dimensions instead. To come!

What broke the Turnout Tracker?

Philadelphia’s turnout on November 3rd was disappointing, but it was far from the bloodbath that the Turnout Tracker was predicting.

At the end of Election Day, I was estimating 285,000 in-person votes, with a CI of (262K, 303K). The actual number was 360K. What went wrong?

View code
knitr::opts_chunk$set(echo = FALSE, warning = FALSE, message=FALSE)

# setwd("C:/Users/Jonathan Tannen/Dropbox/sixty_six/posts/turnout_tracker/tracker_v0/")
library(dplyr)
library(stargazer)
source("config.R", chdir=TRUE)
source("../../R/util.R", chdir=TRUE)
source("../../R/generate_plots.R", chdir=TRUE)
source("../../R/bootstrap.R", chdir=TRUE)

config <- extend_config(config)

params <- readRDS("outputs/params.Rds")
bs <- readRDS("outputs/bootstrap.Rds")

get_ward <- config$get_ward_from_precinct

raw_data <- readRDS("outputs/raw_data.Rds") %>% 
  mutate(
    ward=get_ward(precinct),
    time_of_day=config$base_time + minutes(minute)
  )

current_time <- max(bs@raw_result@time_df$time_of_day)

turnout_df <- get_ci_from_bs(bs, predict_topline, keys="time_of_day")
current_turnout <- filter_to_eod(turnout_df)

ward_turnout <- get_ci_from_bs(
  bs, 
  predict_ward_turnout, 
  get_ward=get_ward,
  keys="ward"
)
precinct_turnout <- get_ci_from_bs(
  bs, 
  predict_precinct_eod,
  keys="precinct"
)

ci_df <- get_ci_from_bs(bs, predict_topline, keys="time_of_day", eod=FALSE)
  
winsorize <- function(x, t = 0.95){
  mean_x <- mean(x, na.rm=TRUE)
  x_demean <- x - mean_x
  cutoff <- quantile(abs(x_demean), probs=t, na.rm=TRUE)
  return(
    mean_x + sign(x_demean) * pmin(abs(x_demean), cutoff)
  )
}
  
resid_data <- raw_data %>%
  mutate(
    pred = predict_turnout(
      bs@raw_result, 
      precinct=precinct, 
      time_of_day=time_of_day
    )$turnout,
    resid = winsorize(log_obs - log(pred))
  ) %>% 
  left_join(ci_df)
  
ggplot(
  ci_df,
  aes(x=time_of_day, y=turnout)
) +
  geom_point(data=resid_data, aes(y=turnout * exp(resid))) +
  geom_ribbon(
    aes(ymin = p025, ymax = pmin(p975, 1.5e6)),
    alpha = 0.2,
    color = NA,
    fill = strong_purple
  ) +
  geom_line(size = 2, color = strong_purple) +
  scale_x_datetime("", date_labels = "%I", date_breaks = '1 hour') +
  scale_y_continuous("", labels = scales::comma) +
  geom_hline(yintercept = 360e3) + 
  geom_text(
    data = data.frame( 
      turnout = 360e3,
      time_of_day = rep(config$base_time + minutes(30), 1),
      label = "Actual Turnout = 360K"
    ),
    aes(label=label, y=turnout),
    vjust = 1.2,
    hjust = 0
  ) +
  expand_limits(x = config$election_day + hours(config$end_hour), y=0) + 
  ggtitle("Estimated In-Person Election Turnout") +
  theme_sixtysix() 

The miss was not the same across the city.

This is bad. I underpredicted the whole of the Northeast, plus North, West, and South Philly. I also ended up over-predicting Center City and Chestnut Hill (and way over-predicting Penn’s 27th).

The patterns here map clearly onto the city’s Voting Blocs, but it’s important to note that the model already accounts for historic correlations among the Blocs. In fact, here’s the same map from the 2019 primary.

There’s much less of a pattern, and the model handled all of the correlations pretty well. It underpredicted Hispanic North Philly, where there was a competitive 7th District council race, but overall the true turnout was well within the CI, and we missed getting it on the nose by only 14K votes.

So yes, something went wrong this year, and yes, it’s correlated with the Voting Blocs. But it’s not as simple as failing to account for correlations. Instead, Covid broke the historic patterns.

What the Tracker does and doesn’t do

First, some background.

The Turnout Tracker takes submissions from voters across the city. Participants give me (a) their division, (b) the time of day, and (c) their “voter number” in their division: how many people have voted before them, plus themselves. The result is that I can estimate the cumulative distribution of votes for each division, and the total number of votes cast so far across the city.

Doing that well requires some hefty statistical work. I use historic correlations among divisions to predict the votes in divisions without any submissions, and estimate a non-parametric time distribution (the curvy line above) on the fly. And I bootstrap the whole thing to get confidence intervals. (Math person? See the Appendix, and then the github repo, for the math.)

A common concern I get about the Tracker is “what if you don’t get many submissions from a ward?” People are concerned that if I don’t get any submissions from the 66th, for example, I’ll treat that as if zero people there voted. Or maybe just assume the 66th is the same as the places where I do have submissions. But I don’t. I use the historic correlations to effectively take a weighted average among the submissions of the wards that historically have been similar. Having submissions from the ward itself will make me more confident in the estimate, but ward estimates should not be biased just because we don’t have submissions.

As a toy example, suppose the city had two wards, which historically showed no correlations. If all of the submissions were from Ward A, then that would have no effect on the estimate for Ward B (they’re uncorrelated): the tracker would simulate Ward B as having the entire range of historic turnouts it’s ever had. The error bars would be huge. As we got submissions from Ward B, the estimate would narrow down on a portion of that range, becoming more confident. In the real Tracker, each Ward is correlated with other Wards at some value between -1 and 1.
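Here’s a minimal sketch of that toy example, using standard conditional-normal math with made-up numbers (this is not the Tracker’s actual code): with zero correlation, a submission from Ward A leaves Ward B’s estimate untouched; with high correlation, it shifts Ward B’s estimate and narrows its error bars.

View code
# Toy two-ward example: the conditional distribution of Ward B's (log) turnout
# given one observation of Ward A's, under different historic correlations.
# Made-up numbers; the real Tracker uses the full division-level covariance matrix.
conditional_B <- function(rho, obs_A, mu = c(A = 10, B = 10), sds = c(A = 0.3, B = 0.3)) {
  cov_AB    <- rho * sds["A"] * sds["B"]
  cond_mean <- mu["B"] + cov_AB / sds["A"]^2 * (obs_A - mu["A"])
  cond_sd   <- sqrt(sds["B"]^2 - cov_AB^2 / sds["A"]^2)
  c(mean = unname(cond_mean), sd = unname(cond_sd))
}

conditional_B(rho = 0.0, obs_A = 10.5)  # mean 10.0, sd 0.30: Ward A tells us nothing about B
conditional_B(rho = 0.8, obs_A = 10.5)  # mean 10.4, sd 0.18: B's estimate shifts and narrows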

For example, in predicting the Northeast’s 66th Ward relative to the city as a whole, here is the weighting I give to submissions from each other ward:

Notice that the 66th Ward depends mostly on other Northeast and South Philly wards, followed by the River Wards and Manayunk, then Center City West. In fact, conditional on the overall city turnout, it usually swings in the opposite direction of North and West Philly. (Though it’s worth pointing out that the “overall city turnout” uses all wards, so high numbers in North Philly may increase the 66th’s estimate by increasing the total estimate.)

The key is that the Tracker will not be broken by disproportionate submissions, or by large swings of turnout among Philadelphia’s Voting Blocs that are consistent with historic swings. What breaks the Tracker is an entirely new pattern, or a swing so large that nothing like it appears in the data back to 2002. And on Tuesday, November 3rd, that’s what happened.

How I handled mail-in ballots

This was the first year with no-excuse mail-in voting, and Covid meant that we would have enormous usage of it. Ahead of the election, I needed to figure out how to account for that.

The patterns of requests seemed to break down along familiar lines: the progressive wards of Center City and the ring around it requested ballots at high rates, while Black wards of West and North Philly did so somewhat less, and the Trumpy wards of South Philly and the Northeast less still. The pattern was familiar, and mapped to the Voting Blocs almost perfectly.

So I figured that once we subtracted out the mail-in votes, the remaining in-person votes would look a lot like a typical election. Maybe the Wealthy Progressive wards would swing toward low turnout and everywhere else toward high, but those swings would be within the correlations the model already captures. I decided to treat Election Day in-person turnout like any other election, ignoring the mail-in votes. Post-hoc, I added the mail-in votes back in to get an accurate picture of the true topline.

What I decided not to do was parametrize the model with mail-in votes to explicitly adjust the predictions (e.g. expecting places with low mail-in requests to vote in person at much higher rates), or to allow for different-than-normal covariances. But when you pretend that in-person votes were all there was, this election was unlike any we’ve ever seen.

A jarring example is comparing the 66th Ward in the Northeast, from which I had no submissions, to the 8th Ward in Center City, from which I had a ton.

Typically, the 66th Ward casts about the same number of votes as the 8th Ward. Its high-water mark was in 2003, when it had 57% more votes than the 8th. In every year since 2016, it’s cast fewer.

So the Tracker expected the 66th Ward’s turnout to be somewhere in this range. I figured the 66th would make up for some of its mail-in lag, and we’d see in-person turnout at maybe 1.5 times the 8th. In other words, we’d see an extreme but historically-plausible proportion.

Here’s what happened:

View code
ggplot(
  turnout_df %>% filter(ward %in% c("08", target_ward)) %>%
    mutate(ward=ifelse(ward==target_ward, "target", ward)) %>%
    pivot_wider(names_from=ward, values_from=turnout, names_prefix = "t_") %>%
    bind_rows(
      turnout_20 %>%
        filter(ward %in% c(target_ward, "08")) %>%
        mutate(ward=ifelse(ward==target_ward, "target", ward)) %>%
        select(ward, inperson) %>%
        pivot_wider(names_from="ward", values_from="inperson", names_prefix = "t_") %>%
        mutate(year = "2020", election_type="general")
    ),
  aes(x=year, group=election_type, y=t_target / t_08)
) + 
  geom_line(aes(linetype=election_type), color=strong_blue) +
  geom_point(size=4, aes(color=(year == 2020 & election_type=="general"))) +
  scale_color_manual(values=c(`FALSE`=strong_blue, `TRUE`=strong_red), guide=FALSE) +
  # geom_histogram(binwidth = 0.1, boundary=0) +
  # geom_vline(
  #   xintercept=ward_turnout %>% 
  #       filter(ward %in% c("45", "08")) %>% 
  #       select(ward, turnout_20) %>% 
  #       pivot_wider(names_from="ward", values_from="turnout_20") %>%
  #       with(`45`/`08`),
  #   linetype="dashed"
  # ) +
  theme_sixtysix() +
  labs(
    title=sprintf("Distribution of %sth Ward turnout vs 8th", target_ward),
    subtitle="Elections from 2002 to the 2020 general (in red).",
    y=sprintf("Ward %s Turnout / Ward 8 Turnout", target_ward),
    x=NULL,
    linetype="Election"
  )

I completely underestimated the amount of catch-up that would happen on Election Day. The 66th Ward actually cast 2.4 times the in-person votes of the 8th, a ratio that looks impossible in the historic data. My assumption that in-person votes would look, at most, like 2003 was wrong.

Obviously this didn’t just happen in the 66th and 8th. A similar plot exists for all of the errors in the maps above.

The result is that the Tracker vastly underpredicted the Northeast, expecting it to be more like Center City than it was (and overpredicted Center City and the universities).

Where to go from here

Mail-in voting is here to stay, though hopefully Covid isn’t. What should be fixed for next elections? There are two possible strategies:

  1. Parametrize the model for mail-in requests. Allow the Tracker to adjust the covariances for the mail-ins requested, and expect a positive amount of catch-up in the low-requesting wards.

  2. Don’t overcorrect. This was probably an outlier election, thanks to Covid. Plus, when I retrain the model in May, I’ll have this election in the training set, so its priors should sufficiently allow for this trend. Finally, in future elections without Trump on the ballot, mail-ins will probably be less partisan. All of this suggests future elections should be relatively safe from this pandemic outlier.

I need to think about this, but I’ll probably choose a mix of these two strategies, and test the heck out of the new version for cases where mail-ins go berserk.

Plus, maybe I’ll finally get my act together and sufficiently recruit submissions from all wards in the city.

Appendix: The math

Suppose we have \(N_{obs}\) submissions for division voter counts. The turnout tracker models turnout response \(x_i\) on the log-scale, as \[ \log(x_{i}) = \alpha + \gamma_{d_i} + f(t_i) + \epsilon_i \] where \(\alpha\) is a fixed effect that scales the annual turnout, \(\gamma_d\) is an \(N_{div}\)-length vector of division-level random effects, with means and covariance that I’ve estimated on historic data, \(f(t)\) is a time-trend that goes from \(e^{f(0)} = 0\) at the start of the day to \(e^{f(t_{max})} = 1\) at the end (clearly \(f(0)\) is undefined, but we can get around this by only starting with the first datapoint), and \(\epsilon\) is noise.

The \(\gamma\) vector of division random effects are modeled as \[ \gamma \sim N(\mu, \Sigma) \] where \(\mu\) and \(\Sigma\) are estimated based on historic data of all Philadelphia elections since 2002.
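As a sketch of the generative side of that model (all of the inputs here, \(\mu\), \(\Sigma\), the time curve, and the noise scale, are made up for illustration; this is not the estimation code), simulating a single submission looks like this:

View code
# Simulate the generative model: division effects gamma ~ N(mu, Sigma),
# a cumulative time curve f(t), and a noisy voter-number observation.
# All numbers here are invented for illustration.
set.seed(66)
n_div <- 5
mu    <- rep(log(300), n_div)                       # historic mean log-turnout by division
Sigma <- 0.05 + diag(0.10, n_div)                   # positively correlated divisions
alpha <- 0.1                                        # this year's overall turnout shift
gamma <- MASS::mvrnorm(1, mu = mu, Sigma = Sigma)   # division random effects

f <- function(t) log(pmin(t / 13, 1))               # toy cumulative curve over a 13-hour day

# A submission from division d at time t: voter number x, modeled on the log scale.
simulate_submission <- function(d, t, sigma_eps = 0.05) {
  exp(alpha + gamma[d] + f(t) + rnorm(1, 0, sigma_eps))
}
simulate_submission(d = 2, t = 6.5)  # roughly half of division 2's end-of-day turnout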

The model simultaneously estimates \(\alpha\), \(f\), and the expectation of \(\gamma\) conditional on \(x\).

Suppose we know \(f(t)\). Define the residual as \(r_i = \log(x_i) - f(t_i)\). Then the \(r_i\) are drawn from a normal \[ r_i \sim N(\alpha + \gamma_{d_i}, \sigma_\epsilon) \] Marginalizing out \(\gamma\), the covariance of \(r_i\), \(r_j\), \(i\neq j\), is \(\Sigma_{d_i, d_j}\), so the vector of \(r\) is drawn from a big multivariate normal, \[ r \sim N(\alpha + D\mu, D \Sigma D’ + Diag(\sigma_\epsilon)) \] where \(D\) is an \(N_{obs} \times N_{div}\) matrix with \(D_{ij} = 1\) if observation \(i\) belongs to division \(j\), 0 otherwise.

The log likelihood of \(r\) is \[ L(r; \alpha) = -\frac{1}{2} (r - \alpha - D \mu)’ (D \Sigma D’ + Diag(\sigma_\epsilon))^{-1} (r - \alpha - D \mu) + C \] and is maximized for an \(\alpha\) satisfying \[ 0 = (r - \alpha_{MLE} - D\mu)'(D \Sigma D’ + Diag(\sigma_\epsilon))^{-1}1_{N_{obs}} \\ \alpha_{MLE} = \frac{(r - D\mu)'(D \Sigma D’ + Diag(\sigma_\epsilon))^{-1}1_{N_{obs}}}{1_{N_{obs}}’ (D \Sigma D’ + Diag(\sigma_\epsilon))^{-1}1_{N_{obs}}} \] To keep ourselves sane, we can write this as \[ \alpha_{MLE} = (r - D\mu)’ w \] where \(w\) is the \(N_{obs}\)-length weight vector defined above. The key to the above formula is that observations from covarying divisions are discounted, so for example if we see two observations from divisions we know vote the same, they each get only half the weight.
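A minimal numeric sketch of that weight vector (toy \(\Sigma\), \(D\), and \(\sigma_\epsilon\), not the production code): two submissions from the same division split the weight that a lone submission would get, while a submission from a less-correlated division keeps more of its own.

View code
# Toy computation of the weight vector w from the alpha_MLE formula above.
# Three observations: two from division 1, one from division 2. Made-up values.
Sigma     <- matrix(c(0.10, 0.02,
                      0.02, 0.10), nrow = 2)
sigma_eps <- 0.2
D_obs     <- rbind(c(1, 0),   # obs 1 -> division 1
                   c(1, 0),   # obs 2 -> division 1
                   c(0, 1))   # obs 3 -> division 2

V_r  <- D_obs %*% Sigma %*% t(D_obs) + diag(sigma_eps^2, nrow(D_obs))
ones <- rep(1, nrow(D_obs))
w    <- solve(V_r, ones) / sum(solve(V_r, ones))

round(w, 3)  # the two division-1 submissions split their weight; division 2's keeps more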

Now consider \(\gamma\). Returning to the non-marginalized distribution, and plugging in \(\alpha_{MLE}\), the log-likelihood of \(r\) is \[ L(r; \gamma) = -\frac{1}{2 \sigma_\epsilon^2} (r - \alpha_{MLE} - D\gamma)'(r - \alpha_{MLE} - D\gamma) - \frac{1}{2}(\gamma - \mu)’\Sigma^{-1}(\gamma - \mu) + C \] Optimizing for \(\gamma_{MLE}\) gives \[ 0 = \frac{1}{\sigma_\epsilon^2} D'(r - \alpha_{MLE} - D\gamma_{MLE}) - \Sigma^{-1}(\gamma_{MLE} - \mu) \\ 0 = \frac{1}{\sigma_\epsilon^2} D'(r - \alpha_{MLE} - D(\gamma_{MLE} - \mu + \mu)) - \Sigma^{-1}(\gamma_{MLE} - \mu) \\ \left(\frac{D’D}{\sigma_\epsilon^2} + \Sigma^{-1}\right)(\gamma_{MLE} - \mu) = \frac{D'(r - \alpha_{MLE} - D\mu)}{\sigma_\epsilon^2} \\ \gamma_{MLE} - \mu = \left(\frac{D’D}{\sigma_\epsilon^2} + \Sigma^{-1}\right)^{-1} \frac{D'(r - \alpha_{MLE} - D\mu)}{\sigma_\epsilon^2} \] Note that \(D’D\) is just a diagonal matrix where the diagonal is the number of observations belonging to each division.

This is just a shrunk, weighted average of the deviations of \(r\) from the means \(\alpha + D\mu\). Remember that \(D\) just maps observations to divisions, and \(D’D\) is just a diagonal with the number of observations for each division. So suppose we saw one observation from each division. The relative contributions to the random effects would be given by \((I + \sigma_\epsilon^2 \Sigma^{-1})^{-1}\) times each observation’s deviance from its mean. (This is what I map above.)
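And here’s a companion sketch for the \(\gamma\) update (again, toy numbers, not the production code): with one observation per division, \(D’D = I\) and the contribution matrix is \((I + \sigma_\epsilon^2 \Sigma^{-1})^{-1}\), which shrinks each division’s deviation toward its historic mean and borrows strength from covarying divisions.

View code
# Toy version of the gamma_MLE formula with one observation per division,
# so D'D = I and the contribution matrix is (I + sigma_eps^2 * Sigma^{-1})^{-1}.
Sigma     <- matrix(c(0.10, 0.06,
                      0.06, 0.10), nrow = 2)
sigma_eps <- 0.2

shrink <- solve(diag(2) + sigma_eps^2 * solve(Sigma))

# Deviations of each division's observation from alpha_MLE + mu:
dev <- c(0.5, 0.0)         # division 1 came in high; division 2 sat right at its mean
round(shrink %*% dev, 3)   # division 1 shrinks toward 0; division 2 gets pulled up with it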