With Great Tech Comes Great Responsibility

Insights from Tech Workers Today

Speaking with those in the industry today can be an invaluable way to prepare for a job in tech. Compiled here are insights from tech workers, academics, entrepreneurs, and former tech workers. Reflecting on their careers and the ethical questions with which they have wrestled, they shared a range of insights helpful to students hoping to create more human-focused tech. Responses below are based on each individual’s experience, but collectively can provide further context for ethical tensions throughout the tech industry.

[Illustration: people building things together]

A former big tech worker who founded a nonprofit focused on tech ethics had a lot to share. Looking back on his post-grad recruiting experience, he wants to encourage students to look beyond corporate jobs. “All companies care about is maximizing profits,” he shared. “If more technical students worked for nonprofits or in the public sector, civil society would be stronger and we’d have more of a counterbalance to corporations.” He also wanted students to know that the venture capital model incentivizes harmful, growth-at-all-costs behavior; for-profit companies that don’t take VC funding may be healthier than VC-backed startups. Similarly, in his experience, companies listed on public markets are typically more accountable than private ones. There are also a number of nonprofit and government organizations researching AI and building products that are worth exploring as students begin their careers in tech.

All companies care about is maximizing profits

A former big tech worker

These days, there’s a lot of talk about the “ethical use of AI.” One expert suggested that instead of thinking about ethics in the abstract, workers and companies need to spend more time developing operational discipline and accountability systems, both when building products and when governing how those products are used. This former tech worker, who now works in academia, recommends that tech companies look beyond internal stakeholders and bring in independent, external parties to look critically at product operations and oversight. For example, if you are about to launch a product, what is the process by which that product launches? How do you test its relationship to existing unethical systems? How does identity link into the product you are building? And how do we characterize identity, given that it is fluid and complex?

In addition to these questions, it is important to note that conversations about ethics can sometimes be limited in scope. Encouraging companies to listen to a range of stakeholders pushes them to go beyond infusing their work with ethical guidelines; it shifts the focus to ensuring that the people affected by a technology are involved in how it is developed, launched, and used. Those who will be most impacted have the greatest stake in the technology and must be given the space and power to push back. There may be no perfect system for avoiding ethical challenges, but involving multiple perspectives is a key way to ensure people have the opportunity to address a problem as soon as they see it.

Students applying to or starting jobs at large tech companies should educate themselves on governance inside the company. Are coworkers sharing information with one another, or is information kept within a single team? What is the difference between the public and private rationales for sharing information? Does information flow across the company outside of management? What are the internal mechanics for pursuing an idea, and does the internal bureaucracy support that process or obscure how ideas move through the company? Can workers seek clarification from management and coworkers about the products the company is working on and how those products are being used? What protections are in place for workers who disagree with management about a new tool?

If these answers are difficult to track down, you can do your own information gathering across internal company resources. You can also ask management what safeguards, if any, are in place to ensure the product you’re developing causes no harm. It’s important to realize that the information you have access to inside the company may never reach the general public, so gathering information and sharing it with your coworkers can be crucial. Doing so can start a necessary dialogue among workers about the ethical challenges of developing a particular product, and those conversations are what allow workers to decide whether, and how, the company should continue developing it.

At startups, questions about the ethics of a contract are likely to have a more direct impact on the bottom line than at large companies. You will have more direct access to decision makers to raise concerns, but smaller companies are more likely to be in a financial crunch and therefore more resistant to making decisions for ethical reasons.

Business leaders also need to understand the long-term impacts of doing unethical work. Writing in The New York Times, Kevin Roose pointed to Dow Chemical’s $4 million research and development contract to help the military develop napalm during the Vietnam War, which cost the company many times that amount in long-term damage to its brand reputation. Open dialogue and ethical choices early on can help companies avoid these pitfalls and their associated economic costs down the road.

Tech companies, despite their public stances to the contrary, are far from apolitical actors. Google, for example, sought to forbid political discussion at work until the National Labor Relations Board forced it to allow such discussions. Earlier this year, Wayfair staff walked out in protest of the company’s contract to sell furniture to immigrant detention centers; in response, its CEO said he would like staff who join the company to be “non-political.”

Additionally, algorithms and databases are inherently political. As policymakers continue to rely on AI to make decisions, the human biases coded into these new technologies have real-world political ramifications. For example, an algorithm designed by Northpointe was used by parole authorities in the United States to predict a criminal defendant’s likelihood of recidivism, and it incorrectly judged Black defendants as having a higher risk of recidivism than white defendants. Human biases programmed into software deepen systemic discrimination.

It is critical for students, and society at large, to recognize the importance of transparency and oversight when developing and implementing AI technologies, so that social harms are not compounded and legitimized under the guise of technological efficiency.