Data for Good
Every day, private sector companies capture valuable information in cities. When someone checks out on a food delivery app, swipes a credit card at the bodega or taps their phone to pay for a taxi, we and other companies capture and analyze that transaction data to reveal important trends.
Cities themselves also aggregate important data, including how their citizens get around, whether students attend school, and where people receive treatment when they become ill.
For urban policymakers, the ability to tap into both private and public sector data can be powerful. There is now an abundance of aggregated information with the potential to transform how cities craft policies and measure their impact, and to improve all sorts of socioeconomic outcomes. The hope is that better use of data will lead to better policymaking to meet some of the growing challenges of cities, such as increasing inequality, changing demographics and strained public services.
The Mastercard Center for Inclusive Growth is working in partnership with the Urban Institute in Washington, D.C. to develop data-driven metrics and methodologies to better answer key questions related to inclusive economic growth in U.S. metropolitan areas. The project will combine Mastercard insights with public data to create new indicators and identify new evidence to support and monitor equitable development efforts.
Sandy Fernandez, the Center’s director of programs for the Americas, spoke with Solomon Greene, senior fellow at the Urban Institute, about how the private sector can leverage its data and analytics capabilities to drive positive social impact in partnership with governments and nonprofits.
Sandy Fernandez: As more cities embrace smart innovations and the internet of things, how can we ensure social equity is built into these types of data-driven technology models?
Solomon Greene: Technology has the potential to create huge volumes of data, but it is always important to think about who the data leave out and what biases exist. For example, if a city uses data derived from a smartphone application, whether people are left out may not be a matter of whether they have a smartphone, but whether they use that specific app. Exclusion from the data that are used in decision-making can create and exacerbate inequalities.
At the same time, once you’ve accounted for any biases in the data, big data sources can answer equity questions that we have never been able to address before. They open up whole new fields of research. For example, Twitter data has been used to better understand patterns of racial segregation and perceptions of neighborhood quality. Data collected by online real estate companies like Trulia can be used to understand housing affordability and who gets priced out of gentrifying neighborhoods and high-cost cities.
Fernandez: Can you talk about how you think about equity, and how it applies to data-driven urban development?
Greene: An equitable city is one that ensures no one is left behind. That involves ensuring that all people’s needs are met, with well-functioning safety nets and intentional efforts to alleviate poverty. At the same time, an equitable city ensures that everyone, regardless of gender, race, sexual orientation or disability, has a fair shot. There needs to be equality of opportunity so that everyone can contribute to and benefit from a city’s prosperity. For a city to become truly equitable, both of those goals must be advanced.
Fernandez: What are the challenges you face when working with private-sector data?
Greene: Before we begin to work with private-sector data, we need to ensure data security and privacy. Data security is obviously paramount, because as the field of data philanthropy develops, and more and more private companies become interested in how they can use their data for the public good, they have to be absolutely reassured that their data is going to be secure. And of course, privacy is essential to protecting the rights of consumers and building trust. One way to protect privacy is by ensuring anonymized data are shared at a sufficient level of aggregation that individual consumers can never be identified.
We also need to understand what is included and what is not included in the dataset, and that goes back to the question of bias and to validation against existing data sources. Validation represents both a challenge and an opportunity. We can cross-reference administrative data and other public record datasets with private-sector data to judge the effectiveness of private-sector data sources, but we can also use private-sector data to validate and provide nuance to public-sector data.
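The cross-referencing Greene mentions can be as simple as comparing the same indicator, measured per area, across a public and a private source and flagging where they diverge. A minimal sketch, with all area names, rates, and the tolerance invented for illustration:

```python
# Hypothetical per-neighborhood poverty rates from two sources (names invented):
public_rate = {"riverside": 0.18, "hilltop": 0.32, "old_town": 0.25}
private_rate = {"riverside": 0.17, "hilltop": 0.45, "old_town": 0.24}

TOLERANCE = 0.05  # flag areas where the two sources diverge by more than this

def flag_divergence(public, private, tol=TOLERANCE):
    """Return areas where the two sources disagree beyond the tolerance."""
    shared = public.keys() & private.keys()  # only compare areas in both
    return sorted(a for a in shared if abs(public[a] - private[a]) > tol)

print(flag_divergence(public_rate, private_rate))  # ['hilltop']
```

Divergent areas are not automatically wrong in either source; they are where analysts would dig into coverage gaps or bias in one dataset or the other.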
Fernandez: What does the landscape look like for nonprofits in terms of their ability to use their own data and access other datasets?
Greene: Virtually all nonprofits that work with individuals in any capacity, be that as housing providers, service providers or community organizers, have access to a wealth of data. They can use that data to improve their own performance and determine how they can better serve their clients and communities. They can also shift the conversation more publicly to influence policy that may affect their communities.
From the Urban Institute’s work with local nonprofit organizations in the Washington, D.C. area through Measure4Change, we have learned that nonprofits often don’t make full use of their datasets as tools for advocacy or public education. We have helped them create data visualizations so they can use their data to tell compelling stories.
We are also seeing nonprofits take advantage of many of the same technology tools that we tend to associate with private-sector firms. As a result, they are creating the type of big data that so many policymakers and researchers are eager to tap into to answer a host of questions. A great example of that is the Crisis Text Line. They have made aggregated and anonymized data publicly available on a website, so that researchers, policymakers and other service providers can look at trends within their state to figure out where and how they can improve interventions.
Nonprofits have a wealth of data at their disposal. Often the challenge is capacity. Another challenge is privacy. Privacy protections are critical when you are dealing with data on individuals. For nonprofits, privacy concerns can be a big barrier when thinking about using or sharing data. Nonprofits should be sharing information with each other about best practices in how to preserve client privacy.
Fernandez: How can organizations build up internal capacity to gain the know-how to be able to analyze their data and gain insights from it?
Greene: There is a real opportunity for philanthropy and public-sector institutions to support and fund nonprofit organizations, and to partner with them to think about how to use data as an asset. Obviously, many nonprofits don’t have the resources to hire their own data scientists. However, pro bono models can help nonprofit organizations tap into their datasets. One example is DataKind, which brings together top data scientists on a pro bono basis with leading social change organizations to collaborate on cutting-edge analytics and advanced algorithms to maximize social impact.