Reading 14: Computer Science Education

Coding is not the new literacy; not everyone needs to understand computer science. The readings from this week describe the recent push in political and popular culture to get more people, especially young people, taking classes in coding and computer science. I don’t think there is a problem with exposing children to the basics of the field, but we should not prioritize coding on the same level as reading, writing, math, or history. That is not to downplay the importance of computers. Everyone should learn how to use a computer, but not everyone needs to know how to program one in the same way that not everyone needs to know how to build a car in order to drive one.

If we do continue the push to bring computer education into the K-12 curriculum, we should begin with basic computer literacy – how to navigate and use a computer as a typical user. This would likely not require significant faculty resources, but it would require student access to computers. I think this is a worthwhile investment, as being able to use a computer is extremely useful in practically all areas of study, and this education could thus be incorporated into existing programs. If the curriculum goes beyond this, the next step, in my opinion, should be computational thinking, as this is once again a skill that generalizes well to many situations in life. Here is where we must begin to be careful that computer science education does not replace existing subjects, because though computational thinking may be useful in areas outside of computing, I do not think it has the same intrinsic value as the traditional subjects. I believe the recent push for computer science education has been at least as much about trying to groom a workforce for the technology industry as it has been about providing young students with a balanced education. Education, especially at the youngest levels, should be about preparing students for life, not the job market. All we need at this level is enough exposure to get children comfortable using computers and maybe, if they’re interested, to connect them with the resources they need to go deeper on their own.

On a somewhat unrelated point, I do think anyone can learn to program. Some people are perhaps born with a natural predisposition to want to program, which is a huge advantage, but anyone is capable of learning the skills necessary to code, just as anyone is capable of learning the skills necessary to read, write, or do math. As I touched on before, I do not think this means everyone should learn to program if they do not want to. I had the luxury of being able to go to school to study what I am passionate about and the fortune of having a marketable passion. Very few people have both, so it is difficult for me to say everyone should go to college and do what he or she is passionate about. With this in mind, I believe it is a good thing that anyone can learn to program and that there are so many accessible and effective educational resources out there for computer science. Everyone should have the opportunity to study computer science, but not everyone needs to take it.

Reading 12: Self-Driving Cars

The motivation for developing and building self-driving cars is ostensibly to make travel safer and more convenient; it is likely also about cutting costs in driving-related industries such as shipping and taxi services. If we get to a point where self-driving cars can perceive the driving environment as well as or better than a human being, they will undoubtedly become better drivers than humans, speeding up travel and leading to fewer accidents, which saves money and lives. While these perfect autonomous drivers are driving, we’ll be able to take a nap or get some work done, which would make car travel a much more pleasant experience.

The major problems with self-driving cars are their currently far-from-perfect technology and their impracticality in most situations. The deaths of Elaine Herzberg and Walter Huang demonstrate that the technology of autonomous vehicles has a long way to go in interpreting and navigating basic road conditions, and it would be extremely irresponsible to roll out a large-scale deployment of vehicles with this technology without keeping a vigilant human driver behind the wheel. There are also few situations in which we have anything to gain by using autonomous vehicles. In urban areas it would be much more cost-effective and environmentally responsible to instead invest in better public transportation and non-motorized travel (bike and pedestrian lanes and the like), and rural areas with limited traffic have little to gain from replacing human drivers with machines. The only good use I can see for autonomous vehicles is trucking and other long trips, but since most accidents occur close to a person’s home, there is once again little to gain in this area.

That being said, I do think sufficiently advanced self-driving technology would make our roads safer. I would trust a decent self-driving car much more than my fellow human drivers to remain focused and react quickly to unexpected situations. Even the simple self-braking and lane-keeping features that are integrated into new cars would go a long way toward preventing accidents should they become more common. I think the discussion of the vehicle making moral judgements in the manner of the trolley problem is misplaced and unproductive. No one I know or have heard of has been in a driving situation that necessitated choosing between the life of a pedestrian and the life of the driver. Even if people have been in this situation, I doubt they had the time and clarity of mind to make moral calculations and then enact their plan. There is no reason to expect a machine could or should be programmed to do so. More interesting and productive is the question of who is liable when an accident does occur. In the current state of the technology, I would say at least some responsibility needs to fall on the human behind the wheel, who should have been ready to take over in case of unexpected behavior, but going forward more and more of the responsibility will have to fall on the manufacturer as more and more of the driving is done by the car.

I don’t think I would want a self-driving car. Not yet. After a couple of generations of the technology have passed and it becomes more reliable, I would certainly be open to investing in one, but I don’t think I could ever get into one without a steering wheel. At the end of the day, the technology is never going to be perfect, just as we are not. The best solution for the near future seems to be human control augmented with computerized safety features. This improves safety, allows drivers to keep their jobs, and lets the human driver keep responsibility for his or her behavior and safety.

Reading 11: Automation and the Future of Work

Automation is changing, will continue to change, and will eventually end employment as we know it. The increased use of computers to do jobs traditionally performed by people is eliminating the need for human roles in every industry, healthcare included. This is a transformation that has been in progress for a long time. Derek Thompson’s Atlantic article mentions that the idea of automation ending work is known as the “Luddite fallacy”, a reference to the 19th-century “British brutes who smashed textile-making machines at the dawn of the industrial revolution, fearing the machines would put hand-weavers out of work”.

The Luddites were right. If humanity continues to follow the trend it has followed from its beginning, we will continue to make better and better tools to do jobs better than we can. This is nothing new. What is (relatively) new is the fact that our tools can do our jobs without us being involved at all, and they can often do them more efficiently and more cheaply than human workers. The natural result of this trend of human ingenuity is a world in which we do not need to work. We could try to slow this process with government regulation designed to protect jobs from automation, but we will not be able to stop it. History shows that anyone who stands in the way of progress, for better or worse, cannot hold his or her ground forever.

Assuming this premise, the question becomes: what are the implications of the end of work, and how should we respond to them? A complete and sudden disappearance of jobs considered easily automated would be a disaster for everyone in our current society except those at the very top of the economic ladder. This, of course, would end up being bad for everyone, as consumers without income are unable to consume, and the economy itself would collapse, putting our robot replacements out of work and triggering the collapse of society as we know it. This is obviously a completely unrealistic scenario, but the reality is that a world without work (or without work on the scale at which it currently exists) cannot look like the world we live in today.

But I believe it can look much better. Setting aside the financial implications for a moment, several of the articles for this week addressed the loss of self-worth and community that come with being unemployed. This concern stems from a painfully puritan view of work, and I imagine these ideas would disappear along with the social stigma of unemployment in a world in which most people are not employed in full-time work. Thompson’s article proposes a number of different ways people can find meaningful uses of their time without trying to make money. A world without work could be a world in which people pursue higher education, spend more time raising families, get involved in their local communities, learn new skills, and, yes, engage in leisure activities such as watching television. To say work has to exist so we have something productive to do with our time is ridiculous and completely devalues the human experience.

The big problem, of course, is how we will get the resources to support ourselves when our jobs are gone. I think a Universal Basic Income is a natural and effective solution to this problem. Basic income will be necessary not only from a moral standpoint, simply to allow people the necessities of life, but also from a practical standpoint because, as mentioned earlier, a world without consumers is a world in which automation is pointless and unprofitable. Yes, basic income would be expensive, but the only real, ethical alternative is the continued existence of mass employment. If automation is going to take our jobs, then we need a share in the value it generates. If automation ultimately does not serve the good of all of us, what does it serve? The good of a privileged few? This is neither desirable nor sustainable.

To conclude, I believe automation is ultimately a good thing for humanity. It allows us to meet our needs more efficiently, which hopefully means it will one day help us meet the needs of everyone. The end of work could be a new, beautiful chapter in human history. It is the responsibility of the developers and owners of new technology to steer innovation toward this future, the responsibility of our leaders to establish the framework for it, and the responsibility of all of us to demand everyone has a fair share in it.

Reading 10: No Comment

Trolling, from what I understand, is the practice of using online platforms to harass a person or group of people. This harassment can take the form of threatening, hateful, or explicit messages or images as well as stalking and “doxxing”, the practice of releasing a person’s personal details online without their consent. Trolling is, in my opinion, a nuisance at best and deeply dangerous at worst. A couple of the articles from this week’s reading refer to people who have had to leave their homes after having their addresses shared in online communities and receiving death threats against both themselves and their families. It is just as dangerous when the harassment is more local. The NY Times article on Yik Yak emphasizes how high school and even middle school students have to deal with this behavior, also known as cyberbullying, on an alarmingly regular basis.

I believe that the companies that operate online platforms do not do enough to combat the spread of this type of content on their networks. Last week, I wrote a bit about how I think both the developers and users of these platforms would benefit from better policing of threatening and hateful content. I stand by this. It is unethical for platform holders to facilitate dangerous behavior online in the name of free speech or of giving voice to “the disenfranchised”, as the creators of Yik Yak say. Developers have neither the power nor the responsibility to hold these abusers accountable for their actions, but they do have the power and responsibility to prevent these abuses from happening on their platforms.

Gamergate is a somewhat recent example of mass trolling/cyberbullying that has received public attention. From my understanding, it was a campaign of targeted harassment against female members of the gaming industry led by a few thousand online users on sites such as 4Chan and Reddit. This was obviously a deplorable movement that these sites should have made a greater effort to curtail. The problem is not, I think, the anonymity of the users but their ability to congregate and share their toxic ideas without interference.

Like I mentioned above, I do believe that trolling/cyberbullying is a huge problem on the internet, especially for young people and children. In my opinion, children are better off not being on the internet at all, and especially not on social media platforms. Maybe this is just because this was my experience, but I cannot imagine how children could benefit from being exposed to social media, and especially to the abuse that takes place there. I do not think children need any special protected space or protected features online – they need responsible parents who make sure they are not participating in online communities that are explicitly not for them.

My personal approach to online trolling is to ignore it. I have recently installed plug-ins on all of my browsers that automatically hide all comments, and they have improved my online experience immensely. I also make an effort to avoid using social media regularly. In my opinion, trolling has ruined the internet, and the best solution for me was simply to avoid seeing it. It is obviously not a perfect solution and only helps me, but I do not feel I am missing out on anything, and I would recommend anyone else tired of toxic internet culture take similar steps.

Finally, I do not believe anonymity on the internet is the problem. As the Slate article on Google Plus’s real-name policy explains, there are legitimate reasons to stay anonymous online (hiding one’s identity from an oppressive government, for example) and when people use their real name online they do not necessarily become any nicer. In fact, real-name policies could make the internet a more dangerous place for the people who are the target of online abuse. Our online pseudonyms offer us protection from overreaching government entities and each other. The biggest problem is the fact that we need this protection. I think online dialogue in its current state may be fundamentally broken. Our brains and algorithms prioritize the production and spread of negative content, and real people are paying the price. Good can come out of online dialogue but it does not, in my opinion, justify or outweigh the evil it enables.

Reading 09: Online Censorship

Online censorship is the idea that some person or organization, be it a corporate or government entity, should have a say in what can and cannot be shared online. It is a difficult issue because it clearly has the potential to come into conflict with free speech, but it is also the only means of policing the enormous amount of vile and untruthful content that exists on the internet. As ethical internet content creators, we have a responsibility to avoid spreading online hate and misinformation by avoiding generating and sharing it, but what are the responsibilities of the people who have the power to regulate the flow of online content?

The world’s governments clearly have a massive stake in what information is and is not censored online. Several of the articles this week focus on China in particular for its policies of blocking services in the country, asking companies to remove content that is critical of the government, and retaliating against the users who share this content. I think most people would agree that this is a gross abuse of power and not a proper use of online censorship. Free and honest political discussion is extremely important for holding government officials accountable for their actions and encouraging fair, representative government policies. I also agree with the referenced Google and Facebook employees who are pushing back against these companies’ plans to provide the Chinese government with greater control over user data and the tools to censor content on these platforms. These companies are pursuing profit over basic human freedoms, which is completely unacceptable.

So should governments have any say in online censorship? It seems apparent to me that we need some sort of legal framework to remove particularly egregious content, such as child pornography and material supporting hate and violence. I think this is one basic line that must be drawn in the online censorship debate. Content that threatens basic human rights can and should be removed with the backing of a government entity, but no more. Any more power can too easily be used to manipulate public opinion, as in the case of China. Thus, I believe it is ethical for companies to remove information broadcast from or supporting terrorist organizations.

It does, of course, fall on companies to decide what should and should not be censored online, as these are the entities that control the flow of internet traffic. I do not know whether it is ethical for these companies to take further measures to censor their platforms. On one hand, these are private companies that should have some say in what content is shared on their platforms, but on the other hand, these platforms have become the de facto forum for public discourse and thus have an obligation to uphold the principle of free speech. I believe they would be right to remove hateful and threatening content from their services, as this type of content is not beneficial to the public discourse. Online harassment and hate speech actively make services worse for all affected users (which is bad for public discourse and the company’s bottom line), so I really do not see any problem in removing this content. The spread of disinformation is a little more complicated. Recent events have shown the public to be terrible at judging the validity of online content and foreign adversaries to be adept at exploiting this fact. Once again, I think both public discourse and online platforms benefit from strong fact-checking services, but we must be careful that the people checking facts and designing fact-checking applications are actually removing lies and not merely serving their own biases.

All in all, I suppose I generally support online censorship. This is not a conclusion I expected to reach, but major online platforms have become such fertile ground for hate, harassment, and disinformation that I have to say they would benefit from content policing. There are other problems with these platforms, such as their oft-discussed ideological echo chambers, but I think censorship of some of the content mentioned earlier makes them better and more responsible public forums. I believe Google’s “Good Censor” document has it right when it says to “police tone instead of content”. Online censorship should be used to remove threats to public discourse, not the ideas that allow it to flourish.

Reading 08: Corporate Conscience

The concept of corporate personhood is the idea that, under the law, a corporation has some of the same rights, protections, and legal status as a person. In the United States, this has traditionally meant that a corporation can be prosecuted for the criminal actions of its leaders and employees and consequently has rights against unreasonable search and seizure and double jeopardy as well as rights to due process, legal representation, and public and speedy trials. As described in the Consumerist article written by Kate Cox, this concept has recently been used to set the precedent of corporations also having freedom of speech and religion. The Atlantic article written by Kent Greenfield proposes that corporate personhood is generally a good concept because it allows corporations to be held responsible for actions for which a single person or group of people could never pay reparations (such as the 2010 Deepwater Horizon oil spill). The article also suggests there are certain rights that corporations should not share with people, such as the Fifth Amendment right to be free of self-incrimination (which they currently do not have) and the ability to spend unrestricted amounts of money on elections (which they currently do have). Greenfield supports this last point by citing the fact that this practice disproportionately skews the electoral process toward the rich and powerful and that corporate interests are rarely aligned with the public interest. Thus, corporate personhood grants a great deal of legal and social power to corporations while also providing the average person some protection against corporate wrongdoing. Ethically, corporate personhood seems to mean corporations share some of the blame and a great deal of the punishment for the actions of their leaders and employees.

The case study in corporate personhood I chose to examine is that of the “Muslim registry” proposed by Donald Trump. I believe that tech workers are absolutely in the right for pledging not to work on a database targeting a certain set of people for their religion or ethnicity. I also think that, because tech workers generally have the privilege of being able to find a comparable job in the same field, corporations would be in the right to fire and replace employees who refuse to work on projects on account of ethical concerns. Corporations should, however, make business decisions that are moral and ethical. Obviously a Muslim registry is a bad, immoral thing, and tech companies are in the right for publicly pledging not to work on such a thing, but who gets to decide what is right for a company? It would make sense that the morality of a company is driven by the collective morality of its employees, as in the case of the Muslim registry, but I do not know how this could be consistently and fairly enforced. It would also make sense for companies’ decisions to be driven by some sort of legal framework, but then what happens when the lawmakers are the ones asking for unethical products? It seems there is no easy answer to the question of corporate conscience. It would be great if we could count on corporate leaders to consistently act in the public interest, but to do so would clearly be a naive mistake.

Ultimately, the best way to encourage companies to act ethically may be to place greater public and legal culpability on them and their leaders in cases of wrongdoing. Companies have a great deal of power and influence in all aspects of our lives, and it is thus right that they should be expected and required to act ethically. In the case of the Muslim registry, it was the workers and journalists who led the charge against injustice, but we cannot count on their responding to every immoral deed done in the name of maximizing shareholder value. The only realistic way to combat corporate immorality seems to be large-scale public outrage and a hope that those in power might listen.

Reading 07: Pervasive Computing

I do not believe that it is ethical for companies to gather users’ information and data mine it in order to sell products and services. The biggest problem with this practice, in my opinion, is that companies are clearly unable to properly secure this information. The security vulnerabilities described in the articles on the iCloud and Equifax data breaches are two great examples of what happens when a company has too much data on its users. Obviously, these situations are slightly different because the users consented to giving up this data and these organizations provide services using that data, but it does go to show that once a company gets its hands on your data, anyone can. In the case of targeted advertising, the situation is even more murky because these companies are not really providing any service to users (unless you count coupons). This results in a situation in which users are giving up their information without consent and the only ones benefiting from the transaction are the companies taking that information. This does not seem right to me. Of course, as a user there is really nothing one can do about this, so the responsibility must ultimately fall to developers not to overreach and collect more data on users than is necessary and to do everything in their power to secure that data once they have it.

Privacy is an unrealistic expectation in this era of pervasive computing. If you are using any type of digital device today, you can be almost certain that it is tracking at least your usage habits, if not also your location and other peripheral data. Because this is the state of technology in 2018, it falls to the privacy-conscious user to protect his or her information by using products that explicitly value users’ privacy and by simply avoiding sharing information on digital platforms except when absolutely necessary. Even those who are conscious of their digital footprint can never completely avoid having their data collected. They will still be tracked through their financial activity, as described in the article on Target’s customer analytics; they will still have friends and family uploading their images and location data to social media; they will still have Google and the NSA cataloguing their every move on the web.

I would like to think that I am a little more privacy-conscious than the average user. DuckDuckGo is my search engine of choice, and I make an effort to use as little social media as possible (which, as a college student, is still admittedly too much). I also, primarily for convenience, use an adblocker. I use these tools whenever I browse the internet because the modern web experience is a massive headache without them. Auto-playing videos, pop-up ads, and the like are not only annoying but can sometimes slow down or break the web page they are on. I think it is ethical to use these tools. I do have my adblocker disabled for a handful of smaller independent websites I visit that serve ads tastefully, but in general I believe users should have control over the products and services they interact with. That is how the market should work – where products are made for customers, not where customers’ data and screen-time are sold as products.

Reading 06: Snowden

Edward Snowden is a hero and a traitor. In 2013, former NSA contractor Edward Snowden shared the contents of a massive number (the exact number seems to still be up for debate) of classified files on NSA surveillance efforts all over the world with three journalists in Hong Kong. These documents revealed the US had been spying on government officials and citizens, including its own citizens, by collecting data on their personal and professional communications. After sharing these documents, Snowden fled to Russia to evade prosecution and has remained there since.

What he did by releasing those documents was right. What the United States government was doing and continues to do is an injustice against both its citizens and those of every country under surveillance. This is not the sort of issue that could be reported to a supervisor or regulatory body; nothing would have been done because everything was legal and supported by those in power. The only way Edward Snowden could have fought this injustice is by doing exactly what he did: giving power to the people by giving knowledge to the people. The only way he could have done this is by taking the details and proof of the NSA programs to the media. This is why Snowden is a hero; he saw injustice and did what had to be done to bring that injustice to light.

He is a traitor because of what he did next. By fleeing to Russia, an adversary nation to the US, Snowden cast doubt on his intentions and therefore on the nature of the whistleblowing itself. As it stands, it is easy to portray the incident as that of a US citizen working with a foreign government to attack the strategic position and credibility of the United States government. Regardless of the purity of his intentions, this does not look good and weakens the stance he took in releasing the information. Edward Snowden should have returned to the US and accepted the punishment for his crime. This is how civil disobedience works. Unjust laws are changed when citizens fight them from within, not when they leave the country to avoid the consequences of their actions. By fleeing to Russia, Snowden became a traitor not only to the United States, but to his own cause.

I am not saying that I would have had the courage to accept the consequences for blowing the whistle on the NSA. I am not saying that I would have had the courage to blow the whistle in the first place. What I am saying is blowing the whistle was the ethically and morally right thing to do and accepting the punishment for doing so would have lent more credibility to that action. The public is better off for knowing the extent of the United States government’s surveillance programs because we can now demand that those in power have respect for our personal privacy, and until that happens we have the power to protect ourselves through encryption and being mindful of information shared digitally. Edward Snowden gave us that power.

Reading 05: On Chelsea Manning

The Chelsea Manning story is extremely complicated. I do not think it is clear who acted rightly or wrongly in this situation. The main issue I have in examining this case from an ethical standpoint is that her goals in leaking classified data to WikiLeaks and the outcomes of those leaks seem ambiguous. From what I can tell, none of the information released was really all that revelatory, and the story of the leak itself seemed bigger than anything that came of it. Several articles mention that the leak could not be directly linked to any impact on the US war effort, and as far as I can tell, it did not seem to have any effect on popular opinion regarding the war. None of the articles seemed to agree on a single, clear motive for Manning’s actions.

One aspect of this case that does seem clear is that Manning’s 35-year sentence was extreme and unjust. Several articles mentioned that it was much more severe than is typical for a conviction for leaking classified information, or even for military crimes. I suspect this was because of the large media response to the leaks as well as Manning’s relatively low military rank – a perfect case to send a message to any would-be leakers. I think President Obama did the right thing by commuting this overly harsh sentence. That being said, I do not think Chelsea Manning was unjustly convicted. There should be consequences for people who leak information related to national security. I think this personal consequence gives whistleblowers more reason to consider whether their actions are truly in the public interest.

This is where I have trouble with Chelsea Manning. I do not know if or why she thought releasing this information would be in the public interest. To break one’s commitment to the US military and risk the consequences is a huge decision, but none of the articles seem to adequately explain Manning’s motive. The original Wired article on the case quotes Manning as having written, “Everywhere there’s a U.S. post, there’s a diplomatic scandal that will be revealed… It’s open diplomacy. World-wide anarchy in CSV format. It’s Climategate with a global scope, and breathtaking depth. It’s beautiful, and horrifying.” This seems like releasing secrets for the sake of releasing secrets, which on some level I can understand, but it seems like a weak case on which to stake one’s life and freedom.

One good thing that did come of the Chelsea Manning leak is the start of a national conversation on government secrets. I think that if starting this conversation was her intention in releasing the classified information, then it could be argued that she acted ethically. This story brought our relationship to the US government and the secrets it keeps into the public discourse and likely helped pave the way for later whistleblowers like Edward Snowden. No organization deserves the trust of its members if it does not act transparently whenever possible. There are some matters that must be kept secret in the interest of public security, but by and large citizens have a right to know how their government operates.

Reading 04: Diversity in Technology

The lack of diversity in technology is a real and severe problem. The NCWIT article makes a compelling argument that the lack of gender diversity specifically is not a result of biological differences and that discussing it as such gives undue legitimacy to the argument. Any perceived difference in the abilities of men and women or of people of different races is systemic and cultural – not a result of biology. Last week an article from the BBC described the slower pace of business in Rome compared to that of American cities, and a classmate shared a firsthand experience with this phenomenon. No one is arguing that Italians are biologically wired to work less than Americans, and if someone were to make this argument, he or she would rightfully be dismissed as racist. The real obstacles to women and minorities in technology are those described in this week’s articles: a relative lack of resources and encouragement from a young age, differing societal pressures and expectations, toxic work and learning cultures, and people like James Damore at Google making excuses for it all.

This lack of diversity is everyone’s problem. I can understand why someone might think it is okay for women or minorities to disproportionately pursue other careers if that is what they are passionate about, but today we are all stakeholders in the tech industry. Software is eating the world – and almost all of it is being developed by a bunch of white guys on the west coast. Everything from our means of transportation to our personal relationships is touched and shaped by services such as Uber and Facebook. If the teams behind these products do not reflect the diverse makeup of their users, then the products will inevitably, at best, have blind spots for those users’ experiences and, at worst, perpetuate the toxic and exclusive cultures from which they come.

I don’t know exactly what we can do to fix this problem. One place for companies to start would be to continue pursuing programs that encourage diversity and to fire employees for harassment, as Uber failed to do in the incidents described in Susan Fowler’s blog post. The structural and societal challenges are harder to address. I hope solutions to these will come, but if they are coming, they are a long way off. It’s going to take a lot more women and minorities in positions of power and a fundamental shift in culture the world over to fix these problems, and that is not going to happen overnight.

One thing we cannot do is give undue attention to the people who seek to perpetuate the white-male-dominated culture. It may seem counterintuitive, but a tolerant culture cannot tolerate intolerance. I think Google made the right move by firing James Damore. To have done anything else would have been to implicitly support his message. I don’t doubt that his heart might have been in the right place, but by appealing to gender stereotypes and justifying those stereotypes with biology, he had a part in reinforcing an unwelcoming culture that should have no place anywhere. If we are going to have open, honest discussions about diversity in technology, they need to be rooted in a basic understanding that everyone deserves respect and owes respect to everyone else.