Larry Hosken. Technical writer. Puzzlehunt enthusiast.

As Mayor, Kim Would Try to Expand Inclusionary Housing Citywide


Second in a series analyzing the mayoral candidates’ records and pledges on housing and homelessness.

In a Dickensian touch, Jane Kim’s District 6 is home to the city’s wealthiest and poorest ZIP codes.

This swath of the city, which includes the Tenderloin, SoMa and Treasure Island, is the eye of San Francisco’s affordability storm. High-rise condo and office towers are mushrooming. The vast majority of recent housing has risen there: 25,658 units were built citywide between 2007 and 2016, with 15,541 of them in District 6. It is the beating heart of the city’s ascendant tech economy.

And through it all, the vast majority of single-room occupancy hotels and supportive housing beds are there, too. District 6 is also the beating heart of this city’s marginalized communities.

Kim, an ambitious politician who in 2016 fell short in her battle with Scott Wiener for Mark Leno’s termed-out state Senate seat, has played an active role in the transformation of her district — and, by virtue of its outsize role in city housing and homeless issues, the city writ large.

She has focused much of her political energy on inclusionary housing — city programs mandating a percentage of apartments in new developments be set aside at below-market rates. She bandied about the catch-phrase “40 is the new 30” after extracting not 30, but 40 percent affordability ratios on several high-profile mega-projects, including the San Francisco Giants’ Mission Rock waterfront development and the 5M mixed-use towers on land owned by the Hearst Corp. That set a new standard. A major component of Kim’s brand is her ability as a dealmaker and proven success in landing gaudy affordability ratios in major projects — a claim none of her rivals can make.

Can District Dealmaking Lead to Citywide Policy?

But a key question is whether Kim’s ability to wring concessions out of the biggest developers will translate into a coherent housing policy on a citywide level. Much of Kim’s work on the Board of Supervisors and in her district has been about maximizing developers’ contributions to affordable housing and neighborhood projects. But the Giants and other large developers enabling her “40 is the new 30” mantra can acquiesce to city affordability demands that lesser entities cannot.

Away from the glittering towers of SoMa, Kim is calling for an easing of rules governing the building of accessory dwelling units in homes — in-law apartments, essentially. “One of my staffers built one, and it cost $200,000,” she said, incredulous at the high price tag. Pare that number back, she said, and the city could add some 40,000 rent-controlled units just like that. (Kim credits Wiener for legislation in this area.)

Kim would also like to reform the process of financing private infrastructure projects. Developers’ inability to pay for such work is what keeps tens of thousands of approved units in the pipeline instead of actually being built. In a more splashy move, she proposed a $1 billion affordable housing bond — a suggestion she dropped, out of left field, at a board committee hearing earlier this year — but doesn’t foresee it going before voters before 2020.

Nuances of Housing Policy

The old political saw is that “when you’re explaining, you’re losing.” And squaring several of Kim’s positions regarding where to build housing, and how much, requires a great deal of explaining.

At an April City Hall rally, Kim crowned herself “the queen of density and upzoning in District 6.” But, one month earlier, she struck a different tone during a rally held in cozy District 7 at West Portal Station. There, Kim launched fusillades against the increased height limits and density that would have been allowed under SB 827, the failed state Senate legislation by her bête noire, Wiener. She described it as a sop to developers, who would not have been required to build a higher percentage of affordable housing or offset the infrastructure and transit pressures brought about by taller, denser communities.

In Kim’s mind, enabling taller, denser buildings in District 6 and calling out attempts to do so in District 7 are not incongruous. “I didn’t say I wouldn’t upzone the Westside,” she said, grinning, during an interview afterward. “I did say SB 827 was the wrong way to upzone.”

To Kim, permitting taller buildings than current zoning rules would allow, without extracting additional money and concessions, is a giveaway to developers. “I have been consistent,” she insisted. “If I do upzone the Westside, it’d be through the local planning process, like the Central SoMa Plan.”

But Kim has come under fire for supporting the Central SoMa Plan, which, in its current iteration, would add some 40,000 new jobs to her district but only 7,000 housing units. Critics bemoan her attempts to curtail housing density on the Westside even as she complains that housing needs to be built somewhere other than SoMa. (The Planning Commission is set to approve the plan Thursday, May 10, then send it to the Board of Supervisors for final approval.)

Central SoMa Plan ‘Not Going to Stay the Same’

“The Central SoMa plan is not going to stay the same,” she said matter-of-factly. “This is the Planning Department’s proposal. I put my name on it because it’s my district. Mayor Mark Farrell put his name on it. I don’t know how much he knows about Central SoMa.”

By affixing her name to the plan, Kim said she has a greater ability to alter it. An environmental impact report will study the feasibility of adding 1,600 units, but that’s still far short of a healthy jobs-housing balance. “I don’t think it’s fair to talk jobs-housing balance in one area plan. We have to look citywide,” she said. “We’re not building offices on the Westside.”

Kim said she hopes to raise the heights on eight or nine SoMa parcels and build more market-rate and affordable housing. “In everything I do, conferring more density and height on a parcel has to come with a higher percentage of affordable housing,” she said.

Finally, adding housing means little to those unable to keep the housing they’ve got, which is why Kim sponsored “Eviction Protections 2.0,” legislation the Board of Supervisors passed in 2015. This ordinance took aim at what Kim calls “nuisance evictions” and what tenants’ rights advocates label “sham evictions”: leaving a stroller in the hallway, hanging laundry off the fire escape, adding a new roommate or caretaker. Kim takes credit for being the first mayoral candidate to embrace this year’s Proposition F, the “City-Funded Legal Representation for Residential Tenants in Eviction Lawsuits” measure on the June 5 ballot. (See “Proposition F: Free Legal Aid for Tenants Facing Eviction”)

Health and Homelessness Solutions

When Kim served as acting mayor a few years ago, she decided to make a splash by spending a night in a homeless shelter. It turned out to be a seminal moment for her approach to homelessness. Kim, who is now 40, said she was, far and away, the youngest person there. “This shelter was built for someone like me,” she said. “Young and able-bodied, but down on my luck.”

But that describes an ever-smaller share of the city’s homeless population. People living on San Francisco’s streets today are older and sicker, and that trend is growing.

The overarching goal of a Mayor Kim would be to treat homelessness more like a public health crisis than an economic problem, carving out a more central role for the Department of Public Health and a lesser one for the Human Services Agency. “HSA has been very effective addressing homelessness and poverty with people whose barrier is a job or some kind of economic need,” she said. “But they are not as effective for people who have, on top of that, mental health issues or other illnesses affecting them.”

That was Kim’s thinking as supervisor when she advocated for the inclusion of nurses at adult shelters, pushed to double the number of medical respite beds in SoMa and supported performing full medical and mental health surveys at all shelters (and the county jail) to better understand who we’re serving (and locking up). While the city’s homeless numbers have been remarkably consistent over the decades, just who is on the streets is changing. Kim said this calls for a change in strategy.

Because it’s costly to house and treat the old and chronically ill, new funds — lots of them — are needed. Building or securing housing has become prohibitively expensive, and obtaining SRO hotel rooms for the needy costs more than twice as much now as it did a decade ago. “We’re in the same market as everyone else,” Kim noted.

On top of all that, Kim hopes to expand mental health services and medical treatment tied to the opioid epidemic. This would mean upping the number of Department of Public Health workers ministering to the homeless. And, while she’s at it, she wants to increase the number of street cleaners. She acknowledges this is going to cost quite a bit more money, and will require additional revenues (read: fees and taxes). And, sans help from the state and federal governments, homelessness is not getting “solved.” Period.

“San Francisco can never resolve the homeless crisis on its own,” Kim said. “But we can make a dent. We are a wealthy city. We need to generate new revenue. And all the tax cuts Trump put into effect, we should be recapturing.”




The Democratic Party has a Messaging Problem


When the Affordable Care Act (aka ObamaCare) was signed into law on March 23, 2010, Democrats had ostensibly mastered a difficult task: passing a massive domestic spending bill that pushed the bounds of progressive legislation and was at odds with public opinion. It was unpopular with Republican voters, who chafed at the Essential Benefits requirement imposed on all health insurance plans, but it was also unpopular with many Democratic voters who felt it did not go far enough by failing to provide a public health insurance option. RealClearPolitics, the polling aggregator, tracked support for the bill at only 39.7 percent, against 50.4 percent opposition, just one day before passage. From 2010 to late 2016, Americans consistently opposed the bill by roughly a 10-percentage-point margin.

Then, in January 2017, for the first time on record since ObamaCare passed, RealClearPolitics showed support for ObamaCare running higher than opposition – a curious development given that the underlying law had not changed. Of course, the context had: talk of repeal was omnipresent in the political discourse, making the loss of newly gained health insurance a very real prospect for Americans. We have a related but different take: the Democratic party has a serious messaging problem. ObamaCare, described by many as vastly unpopular, was in fact just the opposite: its contents were extremely popular with Americans. Voters and legislators simply did not know it. How did that happen?

First, we should note that part of the bad polling numbers was due to bad polling and misreading of the polls. By framing ObamaCare in starkly partisan terms, pollsters managed to categorize many Americans as opposing the law even though they supported everything it did: more government subsidies, more comprehensive insurance, and direct coverage through the Medicaid expansion. For example, when we polled Americans on ObamaCare on November 28, 2016, we found strong support for its major components, including majority support for the Medicaid expansion, banning insurers from denying coverage, a government-created marketplace for insurance, and requiring insurers to keep children on their parents’ plans, with the lone exception of the individual mandate (Figure 1). At the same time, we registered the typical 40 percent support for ObamaCare (or the Affordable Care Act, no matter how we phrased it).


Figure 1

The topline polling evolved as Americans started to learn what these Essential Benefits were and began to grasp the scope and impact of the Medicaid expansion, shifting their opinion of the ACA from hated government regulation to necessary coverage and benefits. Until Congress began debating taking away ObamaCare, Americans had little idea what ObamaCare did beyond regulating markets. The Democratic party had completely failed to communicate the benefits of its policies, leaving the law polling far less popular with both Republicans and Democrats than its actual contents were.

Believing the polling constrained Democratic legislators: it is conceivable that Democrats, had they been able to message the content of earlier versions of progressive healthcare initiatives, could have passed a much more progressive bill without suffering some of the last-minute bruises the Affordable Care Act endured. In effect, by believing these poll results, Democrats neutered their own bill, thinking it sat at the outer bound of acceptable progressive legislation when it was in fact to the right of the voting population.

This failure of communication does not stop at ObamaCare. A vast part of the population supports core Democratic policies, especially on the economic dimension, but does not claim to support the “Democratic policy.” Vast numbers of voters love the Democratic policy but do not know it is the Democratic policy. If public polling continues to concentrate on documenting which party’s policies Americans prefer, but neglects to poll support for the content of those policies, we have a problem: a citizenry that appears much more conservative than it actually is, leading to more conservative policies from representatives who think they are responding to their constituency. Indeed, this could be the driver behind the recently documented phenomenon that most lawmakers estimate the preferences of their constituents as more conservative than they actually are.

We document this disconnect between the level of stated support for the content of Democratic policies (generally pretty high) and stated support for the Democratic party’s plan (generally close to parity with Republicans) on a number of issues. Stripped of any partisan frame, and with the content of the policy clearly defined, there is overwhelming support for the Democratic policy position on healthcare (post-ObamaCare) and overwhelming disapproval of the Republican policies on healthcare. But only a slight plurality of voters claims to support the Democrats’ plan over the Republicans’. “Do you support allowing any American to buy Medicare, if they choose to?” draws support from 82 percent of voters, including 80 percent of Republicans. “Should the US allow healthcare insurance to be sold that does NOT cover pre-existing conditions, maternity care, and mental health?” draws support from just 36 percent of the voting population, failing to reach even 40 percent of Republicans. This is the policy backed by Republican leadership in Congress and coming to fruition in several states. When we simply ask respondents which party’s healthcare policies they prefer, we get a much more muted picture. Yes, a plurality of voters supports the Democratic plan, but support is almost at parity, with 41 percent supporting the Democratic plan and 39 percent supporting the Republican plan – despite the fact that the same respondents had overwhelmingly voiced their support for the actual policies of the Democrats.

Medicare buy-in is the policy supported by many Democratic senators and should be a core policy topic in 2018. Our polling documents the fertile ground this approach would fall on, at a time when the DCCC, the Democratic Party’s organization for electing Democrats to Congress, urges candidates to take more restrained stances on healthcare. The DCCC is reading the same misleading polls that urged constraint in the formation of ObamaCare and a meek posture in its defense.


Figure 2

The story for taxation is similar. We polled the two pillars of the 2017 Republican tax bill: a majority think that the tax rate for households earning more than $250,000 should be higher (55 percent) and that the tax rate for corporations should be higher (58 percent). What’s more, only a small minority of Republicans support lower taxes for high-income households (17 percent) and corporations (17 percent), despite dramatic tax cuts being the main accomplishment of the Trump administration. Yet again, the partisan preference on taxes looks very different: just 39 percent prefer the tax policies of Democrats, on par with the 39 percent supporting the Republican policy. Not exactly overwhelming, considering the differing levels of support for the actual tax policies of the two parties.


Figure 3

Not surprisingly, the story for non-economic issues is slightly different. Take gun safety, for instance: our polling documents majority support for both Republican policies (reciprocal concealed carry) and Democratic policies (restricting the number of bullets). Immigration polls decisively less well, with the exception of the fate of Dreamers (52 percent support a pathway to citizenship and an additional 26 percent support allowing Dreamers to stay and work). Still, we believe this polling carries clear lessons for Democratic candidates in the coming election cycle: run on economic issues, foremost taxation and healthcare, and, as strange as it feels to have to say this, make sure voters understand what the Democratic party stands for. The vast overlap between public opinion and progressive policies could be the fruitful ground on which Democrats take back the House in 2018.

Note on Methods: We collected the data via smartphone with Pollfish. At the time, this was a new and exciting way to collect data (and possibly the only useful mode going forward, as landlines become obsolete). Since we did not have a representative sample of Americans, we used advances in machine learning and statistics to process the raw data into representative estimates, not only for Americans overall but for more fine-grained demographic categories. Today, this methodology is well validated by the academic community, our prediction of the 2016 general election, and a number of validation studies.
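The note doesn’t spell out the estimator, so the sketch below shows only the general idea behind turning a non-representative sample into representative estimates: compute support within demographic cells, then reweight each cell by its known population share (poststratification). The cells, responses, and census shares are made up for illustration; the authors’ actual pipeline is more sophisticated.

```python
# Minimal poststratification sketch. All cells, responses, and census shares
# below are hypothetical; this is not the authors' actual pipeline.
from collections import defaultdict

# (demographic cell, supports_policy) pairs from a skewed survey sample
responses = [
    ("18-34_college", 1), ("18-34_college", 1), ("18-34_college", 0),
    ("35-64_no_college", 1), ("35-64_no_college", 0),
    ("65plus_no_college", 0),
]

# Hypothetical census shares for each cell (sum to 1.0)
census_share = {
    "18-34_college": 0.20,
    "35-64_no_college": 0.55,
    "65plus_no_college": 0.25,
}

# Average support within each cell
totals, counts = defaultdict(float), defaultdict(int)
for cell, support in responses:
    totals[cell] += support
    counts[cell] += 1
cell_mean = {cell: totals[cell] / counts[cell] for cell in counts}

# Weight each cell's mean by its population share, not its sample share
estimate = sum(census_share[cell] * cell_mean[cell] for cell in cell_mean)
print(f"poststratified support estimate: {estimate:.1%}")
```

In practice this step is usually paired with a regression model so that sparse cells can borrow strength from similar ones, but the reweighting is the core of how a skewed sample can still yield representative toplines.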


Math Can’t Solve Everything: Questions We Need To Be Asking Before Deciding an Algorithm is the Answer


Across the globe, algorithms are quietly but increasingly being relied upon to make important decisions that impact our lives. This includes determining the number of hours of in-home medical care patients will receive, whether a child is so at risk that child protective services should investigate, whether a teacher adds value to a classroom or should be fired, and whether someone should continue receiving welfare benefits.

The use of algorithmic decision-making is typically well-intentioned, but it can result in serious unintended consequences. In the rush to figure out whether and how they can use an algorithm, organizations often skip over one of the most important questions: will the introduction of the algorithm reduce or reinforce inequity in the system?

There are various factors that impact the analysis. Here are a few that all organizations need to consider to determine if implementing a system based on algorithmic decision-making is an appropriate and ethical solution to their problem:

  1. Will this algorithm influence—or serve as the basis of—decisions with the potential to negatively impact people’s lives?

Before implementing a decision-making system that relies on an algorithm, an organization must assess the potential for the algorithm to impact people’s lives. This requires taking a close look at who the system could impact and what that would look like, and identifying the inequalities that already exist in the current system—all before ever automating anything. We should be using algorithms to improve human life and well-being, not to cause harm. Yet, as a result of bad proxies, bias built into the system, decision makers who don’t understand statistics and who overly trust machines, and many other challenges, algorithms will never give us “perfect” results. And given the inherent risk of inequitable outcomes, the greater the potential for a negative impact on people’s lives, the less appropriate it is to ask an algorithm to make that decision—especially without implementing sufficient safeguards. 

In Indiana, for example, after an algorithm categorized incomplete welfare paperwork as “failure to cooperate,” one million people were denied access to food stamps, health care, and cash benefits over the course of three years. Among them was Omega Young, who died on March 1, 2009, after she was unable to afford her medication; the day after she died, she won her wrongful termination appeal and all of her benefits were restored. Indiana’s system had woefully inadequate safeguards and appeals processes, but the stakes of deciding whether someone should continue receiving Medicaid benefits will always be incredibly high—so high as to question whether an algorithm alone should ever be the answer.

Virginia Eubanks discusses the failed Indiana system in Automating Inequality, her book about how technology affects civil and human rights and economic equity. Eubanks explains that algorithms can provide “emotional distance” from difficult societal problems by allowing machines to make difficult policy decisions for us—so we don’t have to. But some decisions cannot, and should not, be delegated to machines. We must not use algorithms to avoid making difficult policy decisions or to shirk our responsibility to care for one another. In those contexts, an algorithm is not the answer. Math alone cannot solve deeply-rooted societal problems, and attempting to rely on it will only reinforce inequalities that already exist in the system.

  2. Can the available data actually lead to a good outcome?

Algorithms rely on input data—and they need the right data in order to function as intended. Before implementing a decision-making system that relies on an algorithm, organizations need to drill down on the problem they are trying to solve and do some honest soul-searching about whether they have the data needed to address it.

Take, for example, the Department of Children, Youth and Families (CYF) in Allegheny County, Pennsylvania, which has implemented an algorithm to assign children “threat scores” for each incident of potential child abuse reported to the agency and to help caseworkers decide which reports to investigate—another case discussed in Eubanks’ book. The algorithm’s goal is a common one: to help a social services agency most effectively use limited resources to help the community it serves. To achieve that goal, the county sought to predict which children are likely to become victims of abuse, i.e., the “outcome variable.” But the county didn’t have enough data concerning child-maltreatment-related fatalities or near-fatalities to create a statistically meaningful model, so it used two variables it had a lot of data on—community re-referrals to the CYF hotline and placement in foster care within two years—as proxies for child maltreatment. That means the county’s algorithm predicts a child’s likelihood of re-referral and of placement in foster care, and uses those predictions to assign the child a maltreatment “threat score.”

The problem? These proxy variables are not good proxies for child abuse. For one, they are subjective. As Eubanks explains, the re-referral proxy includes a hidden bias: “anonymous reporters and mandated reporters report black and biracial families for abuse and neglect three and a half times more often than they report white families”—and the reports sometimes come from angry neighbors, landlords, or family members making intentionally false claims as punishment or retribution. As she wrote in Automating Inequality, “Predictive modeling requires clear, unambiguous measures with lots of associated data in order to function accurately.” Those measures weren’t available in Allegheny County, yet CYF pushed ahead and implemented an algorithm anyway.

The result? An algorithm with limited accuracy. As Eubanks reports, in 2016, a year with 15,139 reports of abuse, the algorithm would have made 3,633 incorrect predictions. This equates to the unwarranted intrusion into and surveillance of the lives of thousands of poor, minority families.

  3. Is the algorithm fair?

The lack of sufficient data may also render the application of an algorithm inherently unfair. Allegheny County, for example, didn’t have data on all of its families; its data had been collected only from families using public resources, i.e., low-income families. This resulted in an algorithm that targeted low-income families for scrutiny, and that potentially created feedback loops, making it difficult for families swept up into the system to ever completely escape the monitoring and surveillance it entails. This outcome offends basic notions of what it means to be fair. It certainly must not feel fair to the Allegheny County families adversely impacted.

There are many measures of algorithmic fairness. Does the algorithm treat like groups similarly, or disparately? Is the system optimizing for fairness, for public safety, for equal treatment, or for the most efficient allocation of resources? Was there an opportunity for the community that will be impacted to participate in and influence decisions about how the algorithm would be designed, implemented, and used, including decisions about how fairness would be measured? Is there an opportunity for those adversely impacted to seek meaningful and expeditious review, before the algorithm has caused any undue harm?
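To make one of these measures concrete: demographic parity asks whether the algorithm flags cases at similar rates across groups. The sketch below is purely illustrative—hypothetical groups and decisions, not Allegheny County’s data or its actual metric—and a large gap is a signal to investigate, not proof of unfairness on its own.

```python
# Illustrative sketch of one fairness measure: the demographic parity gap.
# The groups, decisions, and numbers below are hypothetical.

def flag_rate(decisions):
    """Fraction of cases flagged for investigation (1 = flagged, 0 = not)."""
    return sum(decisions) / len(decisions)

# Hypothetical algorithm outputs for two demographic groups
decisions_by_group = {
    "group_a": [1, 0, 1, 1, 0, 1, 0, 0],
    "group_b": [0, 0, 1, 0, 0, 0, 1, 0],
}

rates = {group: flag_rate(d) for group, d in decisions_by_group.items()}
gap = max(rates.values()) - min(rates.values())

for group, rate in rates.items():
    print(f"{group}: flagged {rate:.0%} of cases")
print(f"demographic parity gap: {gap:.0%}")
```

Demographic parity can conflict with other measures, such as equalized error rates across groups; choosing which measure to optimize is exactly the kind of decision the impacted community should help make.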

Organizations should be transparent about the standard of fairness employed, and should engage the various stakeholders—including (and most importantly) the community that will be directly impacted—in the decision about what fairness measure to apply. If the algorithm doesn’t pass muster, it should not be the answer. And in cases where a system based on algorithmic decision-making is implemented, there should be a continuous review process to evaluate the outcomes and correct any disparate impacts.

  4. How will the results (really) be used by humans?

Another variable organizations must consider is how the results will be used by humans. In Allegheny County, despite the fact that the algorithm’s “threat score” was supposed to serve as one of many factors for caseworkers to consider before deciding which families to investigate, Eubanks observed that “in practice, the algorithm seems to be training the intake workers.” Caseworker judgment had, historically, helped counteract the hidden bias within the referrals. When the algorithm came along and caseworkers started substituting the algorithm’s judgment for their own, they effectively relinquished their gatekeeping role, and the system became more class- and race-biased as a result.

Algorithmic decision-making is often touted for its superiority over human instinct. The tendency to view machines as objective and inherently trustworthy—even though they are not—is referred to as “automation bias.” There are, of course, many cognitive biases at play whenever we try to make a decision; automation bias adds an additional layer of complexity. Knowing that we as humans harbor this bias (and many others), when the result of an algorithm is intended to serve as only one factor underlying a decision, an organization must take care to create systems and practices that control for automation bias. This can include engineering the algorithm to provide a narrative report rather than a numerical score, and making sure that human decision makers receive basic training both in statistics and on the potential limits and shortcomings of the specific algorithmic systems they will be interacting with.

And in some circumstances, the mere possibility that a decision maker will be biased toward the algorithm’s answer is enough to counsel against its use. This includes, for example, the context of predicting recidivism rates for the purpose of determining prison sentences. In Wisconsin, a court upheld the use of the COMPAS algorithm to predict a defendant’s recidivism rate on the ground that, at the end of the day, the judge was the one making the decision. But knowing what we do about the human instinct to trust machines, it is naïve to think that the judge’s inherent discretion was not unduly influenced by the algorithm. One study on the impact of algorithmic risk assessments on judges in Kentucky found that algorithms only affected judges’ decision making for a short time, after which they returned to their previous habits. Still, the impact may differ across communities of judges, and adversely impacting even one person is a big deal given what’s at stake—lost liberty. Given the significance of sentencing decisions, and the serious issues with trying to predict recidivism in the first place (the system “essentially demonizes black offenders while simultaneously giving white criminals the benefit of the doubt”), use of algorithms in this context is inappropriate and unethical.

  5. Will people affected by these decisions have any influence over the system?

Finally, algorithms should be built to serve the community they will be impacting—and never solely to save time and resources at whatever cost. This requires that data scientists take into account the fears and concerns of the impacted community. But data scientists are often far removed from the communities in which their algorithms will be applied. As Cathy O’Neil, author of Weapons of Math Destruction, told Wired earlier this year, “We have a total disconnect between the people building the algorithms and the people who are actually affected by them.” Whenever this is the case, even the most well-intended system is doomed to have serious unintended side effects.

Any disconnect between the data scientists, the implementing organization, and the impacted community must be addressed before deploying an algorithmic system. O’Neil proposes that data scientists prepare an “ethical matrix” that takes into account the concerns of the various stakeholders who may be impacted by the system and, in her words, helps “lay out all of these competing implications, motivations and considerations and allows data scientists to consider the bigger impact of their designs.” The communities that will be impacted should also have the opportunity to evaluate, correct, and influence these systems.

***

As the Guardian has noted, “Bad intentions are not needed to make bad AI.” The same goes for any system based on algorithmic decision-making. Even the most well-intentioned systems can cause significant harm, especially if an organization doesn’t take a step back and consider whether it is ethical and appropriate to use algorithmic decision-making in the first place. These questions are just starting points, and they won’t guarantee equitable results, but they are questions that all organizations should be asking themselves before implementing a decision-making system that relies on an algorithm.

 




Worried About Risky Teenage Behavior? Make School Tougher


The following originally appeared on The Upshot (copyright 2018, The New York Times Company). It also appeared on page A19 of the May 1, 2018 print edition.

Like all parents of teenagers, I worry that my children will engage in risky behavior, including drinking, smoking and drug use. The more time they spend doing healthier extracurricular activities — soccer, piano, cleaning their rooms (ha!) — the better.

But it turns out that what they do in school can also affect their choices outside the classroom.

Between 1993 and 2013, 40 states and the District of Columbia increased graduation requirements: a specified number of courses in each subject necessary for a high school diploma. The increases have been most common in mathematics and science, and may partly explain the growth in college majors in STEM fields. In 1993, states required between two and six math and science courses for high school graduation. By 2014, the range was four to eight.

A paper in the American Journal of Health Economics suggests a connection: Some of the reduction in risky behavior by teenagers is driven by greater academic demands at school.

Zhuang Hao, an economics Ph.D. candidate, and the economist Benjamin Cowan, both at Washington State University, examined the number of math and science courses that states required for a high school diploma and the relationship to risky behavior among high school students. Their data spanned the years 1993 to 2011 and included over 100,000 students across 47 states (excluding Colorado, Nebraska and Iowa, which did not participate in the surveys of students upon which the analysis relies).

According to the study, these increases in state math and science high school graduation requirements reduced alcohol consumption without any offsetting increase in marijuana or cigarette use. More demanding academic standards decreased the number of days teenagers drank as well as the rate at which they engaged in binge drinking (defined as five or more drinks at a time).

For each additional math or science course required of high school students, the probability they drank or binge drank fell 1.6 percent. The results are a bit larger for males and for nonwhite students.

The study by Mr. Hao and Mr. Cowan could not explain why graduation requirements have larger effects on the behavior of male and nonwhite students. One possible explanation is that male teenagers binge drink more than females, presenting a greater opportunity for reduction.

But that explanation breaks down for nonwhites because they drink less than white students. Here, the explanation could be that nonwhite students are more affected by state requirements because they are more likely to take the minimum number of courses required or attend schools with standards that were lower before the state laws passed. Another study, focused on future earnings, also found that male and nonwhite students benefited more from higher math standards for high school graduation.

One concern is that the results could be explained by high school dropouts. Previous research found that higher requirements encouraged some children to drop out of school and, therefore, out of the study sample. Other work shows that staying in school reduces smoking rates and delays drinking. If those who drop out are the ones more likely to drink, that could skew the findings. However, dropout rates are very low for students under 17 because they are required by law to attend school until that age. The study findings hold up when examining only this younger group.

The study doesn’t explain why greater graduation requirements might reduce risky behavior, but the authors offer two hypotheses. First, greater demands at school take more time: longer hours doing homework and studying. Students who spend more time on schoolwork have less time to do other things. (They could also be reducing sleep or exercise, however.)

Second, a number of other studies have shown that increased high school math graduation requirements have been linked to higher future earnings. This increases the potential loss to students who jeopardize those future earnings through risky behavior. (This second theory requires teenagers to be forward-looking, something parents of teenagers might find implausible.)

Rates of alcohol, drug, and tobacco use among teenagers are high enough to provoke concern among their parents. According to the Centers for Disease Control and Prevention, about one-third of high schoolers consumed alcohol in the last month, and 18 percent had five or more drinks when they did so. One in five had used marijuana in the past month, and more than 5 percent had used cocaine or hallucinogenic drugs. Just over 10 percent of high schoolers smoke cigarettes.

Nevertheless, by some measures, teenagers are engaging in less risky behavior than they used to. Their rates of alcohol and cigarette use have trended downward since the early 1990s, though use of other drugs is up.

Reducing risky behavior early in life is important because habits established in youth often persist into adulthood. Deterring those behaviors early has long-term benefits. A clever body of work takes advantage of school choice lotteries, in which families who win the lottery can place their children in their preferred — typically higher-quality — school. For example, one study of the Charlotte-Mecklenburg, N.C., school district found that lottery-winning middle school and high school enrollees entered higher-quality schools and committed less crime seven years later.

There are other ways the education system can help children be and stay healthy. There are many evidence-based programs that schools can use to directly address the factors that drive or deter substance use. One study found that higher teacher wages are associated with lower mortality. A study of Southern states found that decreasing student-teacher ratios, increasing teachers’ wages, and lengthening the school year are all associated with better future health of students, including reduced smoking, obesity and mortality.

There is variation in results of studies like these. Not all find a connection between education and health or behavioral outcomes. It’s also reasonable to be concerned that when connections are found, correlations are not causation.

But many studies in this area exploit natural experiments: events that are effectively random and not within the control of study participants. “Changes in school quality were generally not the direct choices of families and local communities,” said Ezra Golberstein, an associate professor at the University of Minnesota School of Public Health and a co-author of the study of Southern states. “That increases our confidence that the findings are causal.”

These aren’t the only ways to deter risky behavior by teenagers. Increasing taxes on alcohol and tobacco products also has that effect. But more rigorous demands at school — as well as approaches like lengthening the school year — may deter students from a broader set of risky behavior while better preparing them for higher-wage jobs, things that taxes alone cannot do.

@afrakt


Still waiting for someone to point out SF having built massive edifices on vulnerable foundations is also the story of the internet.



Posted by krave on Wednesday, April 18th, 2018 10:34pm



The 128-Language Quine Relay


Yusuke Endoh has created an astonishing 128-language Ouroboros quine. This Ruby program produces a Rust program, which produces a Scala program, which produces a Scheme program, and so on, progressing through another 124 languages before returning to its original state. To understand what this is and how it came to be, we first need to consider the quine.

A quine is a program that prints its own source code to the screen. The term comes from philosopher Willard Van Orman Quine’s paradox “Yields falsehood when preceded by its quotation,” which fails to yield falsehood when preceded by its quotation, making it neither true nor false. The quine in code is named not in reference to the paradoxical quality of the statement, but to its embedding of the quote in itself, where it acts as both the subject and expression of the sentence. The quine program, likewise, flattens the reading of the program with the running of the program: print its source code to the screen or execute it, and the result is the same.

Since this is a fancy way of saying “it prints its own source code,” one might expect such a program to be simple. However, even concise quines in fairly straightforward languages, like Python, are strange looking:

[Image: a short Python quine]
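For readers without the image, a classic short Python quine (not necessarily the exact one pictured) looks like this:

```python
# The string below is a template for this entire file, comment included.
s = '# The string below is a template for this entire file, comment included.\ns = %r\nprint(s %% s)'
print(s % s)
```

Run it and diff the output against the file: they match byte for byte. The trick is that %r re-inserts the string into itself, quotation marks and escape sequences included, while %% collapses back to a single %.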

An Ouroboros quine, like Endoh’s Quine Relay, cycles through many languages. We can think of an ordinary quine as an Ouroboros quine of only one language. Its primary feature is ending up back where it started, the snake eating its own tail. The multi-language Ouroboros is a cousin of the polyglot, a program that runs in more than one language, since each iteration of the Ouroboros quine holds information for the other steps, passively, until those steps are executed. At any one stage, changes to the program could affect any other step in the chain. Thus, adding a new language to the quine relay often involves touching nearly every line of the program. For an example of what that looks like, here’s a tiny excerpt from the diff for adding the Haxe language, between Haskell and Icon:

[Image: excerpt of the diff adding Haxe to the Quine Relay]

Although the code may appear inscrutable, the syntax of the quirkier languages is more apparent (see the “PLEASE” for INTERCAL in line 51). The entire check-in can be seen here.

According to Endoh, the most challenging transitions were Befunge to BLC8 to brainfuck, as all three are esoteric. The relay runs through languages strictly in alphabetical order, so there was no opportunity to group easier-to-work-with languages together. BLC8 works bit-wise, which meant finding a byte-aligned encoding to work with it and feed brainfuck. Other esolangs presented challenges as well; Piet, the language that uses images as source code (read the interview with Piet’s creator here), was a bit easier because it comes after Perl 6, which bundles Zlib as a standard library, making it straightforward to generate a PNG file. Had it followed, say, brainfuck, it would have been a much larger challenge.

Apart from his work in the esoteric space, Endoh is a programmer who helped develop Ruby (which explains why the Quine Relay begins with that language). He served as the release manager for Ruby 2.0 and developed a number of Ruby’s key features. His fascination with quines began with what he describes as “artistic obfuscation” and following the International Obfuscated C Code Contest (IOCCC):

I met Don Yang’s IOCCC 2000 winning entry. This is a three-phase quine. The shapes of each phase are three Japanese words, “aku”, “soku”, and “zan”. It was really interesting to me, so I studied it deeply, and I started writing similar programs in Ruby and C.

Endoh has gone on to win 14 IOCCC awards, two of them just last week in its latest iteration, making him the most-awarded programmer in the history of the contest. You can find more of his work on GitHub, including a radiation-hardened quine (where any single character in the program can be removed and it still works), an encoding of Ruby using only underscore and space, and Ruby encoded into DNA sequences.
