Reading 14: A math major and an art major walk into a bar…

I do not think coding is the new literacy. I think coding is an incredibly useful skill to have, and I think computational thinking is important to learn, but I do not see it being a necessary subject in all schools, at least in the academic setup we currently have. If computer science becomes a mandatory class, I think we will have to rethink our entire education system and what we view as “mandatory.” I do not think that computational thinking is more important than creative thinking, and we live in a world right now where funding for creative classes is being cut. I do, however, believe that computer science classes, or other classes that promote computational thinking, should be made more accessible to all students, and that we should make them more desirable to take than they currently are.

In my high school, most people did not even know we offered a computer science class. The class was taken by a few guys who were already fairly familiar with computers, in a hidden computer lab, and it did not count for anything more than an elective. With all of the mandatory classes I had to take to graduate, along with the classes I was “encouraged” to take in order to get into the colleges I wanted, I had no time for an extra elective that would not count towards a specific requirement. I understand that all schools and states have different requirements and different numbers of classes you can take, but in my case there simply would not have been time for me to take that class. By allowing the class to count towards a requirement, or making it a more welcoming environment for all students (regardless of gender, race, clique, or familiarity with the topic), we can at least make computer science a more accessible skill to learn for those who may be interested.

One way computer science could become more accessible or attractive is to include it as part of the computer classes already required in elementary school. I was required to take a computer class from K-8. We learned how to type, use Word, PowerPoint, and Excel, and google efficiently, and we spent a lot of time in Microsoft Paint and “Kid Pix.” Now kids are exposed to even more technology, and it may be worth teaching them some basic computational thinking or coding techniques at that young age. Then the kids can decide for themselves if that is a skill they would like to develop and continue taking classes in, just like they do with art, music, gym, etc. I think computational thinking is important to learn young, like creative thinking, but not everyone is meant to love coding, just as not everyone loves playing an instrument or creating artistic masterpieces.

I have a math mind; that’s why I majored in it. I think about things logically, and I developed computational thinking through the advanced math and science classes I took. So in my head, it feels like everyone could learn to program. But then I think about my sister, who has an incredibly artistic mind. She hated math and loved art, which is why she pursued an art degree. I think if someone tried to teach her how to code she could catch on to certain things, because she is a very smart person, but I do not think she would enjoy it the same way I do. She and I have gone to painting classes together, and I get frustrated at times because it comes so naturally to her, while mine looks only slightly better than the paintings of the elementary schoolers she teaches. I think everyone has their own skills and passions, and they should be free to explore those. This is something that is wrong with our educational system as a whole right now. Everyone can learn math, they can learn to read, they can learn to draw, and they can learn to code. But not everyone is good at all of those things, and not everyone is passionate about all of those things.

Reading 12: We’re Cruisin.

I can see the motivation for building self-driving cars. It is all about innovation. Everyone wants to be the first person to create “the next big thing.” Although many companies are working on this problem, and some have even put cars on the road, no one has perfected it yet. Driving a car can be a very dangerous thing, and the first person to perfect a vehicle that can drive itself and avoid the risk of human error will be very rich. One thing I find interesting is that we do not yet know if self-driving cars will actually be safer; we just know that human drivers are known to cause accidents and that self-driving cars “might” eliminate or decrease the number of dangerous accidents.

Aside from the possibility of being safer, another pro of self-driving cars is that they will allow the “driver” to become a passenger, giving them more time to do other things. Imagine having an hour commute to and from work every day. That means a typical 8 hour work day turns into a 10 hour day. Now imagine being able to work while in your car on the way to the office. If you could start working on your commute, and keep working on your way back home, you could cut your 10 hour day back to an 8 hour day and still get all of your work done. Another pro is that other self-driving cars could communicate with your car. Imagine knowing exactly what the car in front of you is going to do! That could help you determine which lane you want to be in to make your commute faster. Other pros include eliminating “driver distractions” and drunk driving, allowing higher speed limits, and easing heavy traffic.

Important cons of self-driving cars include cost, uncertainty about whether they will actually increase safety, determining who is at fault in case of an accident, and whether the sensors and cameras will actually be effective on all roads in all conditions. People also fear losing the ability to actually drive a car. What if something malfunctions and the car must default to manual control, and the driver does not actually know how to drive? Also, some people genuinely enjoy driving cars. Probably the biggest concern with self-driving cars is the “social dilemma.” Who is at fault, and what should the car do in a life-and-death situation? Many people use the trolley problem to discuss the morality of autonomous cars. Another way to think about this: if your car is driving next to a cliff and it has the option to drive off the cliff or run into a child, what should it do? What if there is also a child in your car? How do we value one person’s life over another’s? As human drivers, our natural reaction in an accident is often to save ourselves. We usually do not have time to weigh the value of the other person’s life against our own before a collision occurs. But what does that mean for a computer? And who is liable when this occurs?

I honestly do not have a great answer to this problem. Every human is different, and every human is going to have a different opinion. It is impossible to make a “perfect” self-driving car that reflects the moral behavior of every human, because we will not all agree on the answer to the trolley problem. We also will not all agree on who is liable. The software engineers are trying to “mimic” human reactions. They are just doing their jobs, and they are not actually the computer itself, so how can they be liable? The driver is not really in control, so how can they be liable? In the most recent case, where an Uber self-driving car killed a pedestrian in Arizona, police are putting the blame on the victim, saying the collision would have been difficult to avoid even if the car had not been autonomous. But what if the pedestrian had instead been another self-driving car? Then who would get the blame?

Personally, I am not interested in owning a totally self-driving car at this time. I think some of the autonomous features are cool, like the car being able to parallel park on its own, braking for you to avoid an accident, or even making sure your car stays in its lane. But I am not ready to give total control of my vehicle to a computer. In my opinion, cars are very dangerous things. I am a tiny human in total control of something that weighs multiple tons, and I take that very seriously. For now, I feel safer being in control of the car than trusting a computer to be in control. Maybe someday the technology will advance enough, and there will be enough proof that giving control to a computer is actually safer, but for now I do not have enough evidence to trust a computer to totally drive my car.

Reading 11: “Elementary, My Dear Watson”

Artificial Intelligence is the study of trying to make computers intelligent, or rather able to think, learn, decide, and understand the way humans do. Strong AI would totally simulate the way a human thinks. Weak AI is aimed more at doing one particular thing, similar to how humans might. AI is similar to human intelligence in that we are trying to simulate exactly that: human intelligence. What makes it different is right in the name: artificial. This “intelligence” is not natural. We as humans can tell a computer what to do and how to make decisions. We can tell it where to look to learn new things. We can inform it how to interact with other humans. We can even make it look like a human. But none of it is natural or really real. In many cases of AI that exist today, the machine is doing what the developer tells it to do and learning what the developer tells it to learn.

Things like AlphaGo, AlphaZero, and Deep Blue seem to be more weak AI, if AI at all. When “teaching” a computer how to play a game, you are teaching it how to analyze data and choose a “move” based on the data’s outcome. A machine will perform better the more games it plays, just like a human will perform better the more games they play; however, the way they learn is very different. A machine does not require sleep or food. It has one job, which is to play the game 24 hours a day, 7 days a week, and learn everything it can. It can also easily check the outcomes of certain moves from past experiences and remember the winning percentages of certain moves. That last part is something a human does not do when playing a game. When a human plays a game, they have to go with their instincts. They have to consider their options and trust their gut that they are making the right decision. This could mean they choose a safe move over a risky move, or vice versa. A human can take things into account that a computer may not consider, like knowing that an opponent they have played many times tends to make safer choices, or simply feeling lucky that day. There are certain instincts and signals that a human can pick up on that even math cannot predict.
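To make the “remember the winning percentages” idea concrete, here is a toy sketch in Python. This is not how AlphaGo or Deep Blue actually work (they rely on search over future positions and, in AlphaGo’s case, neural networks); it is only an illustration of a program tallying past outcomes and preferring the move that has won most often. The game, the move names, and the functions are all made up.

```python
# Toy illustration: pick the move with the best historical win rate.
# Purely hypothetical; only a cartoon of "remembering winning percentages."
from collections import defaultdict
import random

# move -> [wins, games played], tallied from previously finished games
history = defaultdict(lambda: [0, 0])

def record_game(move, won):
    """Update the tally for a move after a finished game."""
    history[move][1] += 1
    if won:
        history[move][0] += 1

def pick_move(legal_moves):
    """Prefer the legal move with the highest observed win rate;
    fall back to a random move when there is no data yet."""
    def win_rate(move):
        wins, games = history[move]
        return wins / games if games else 0.0
    best = max(legal_moves, key=win_rate)
    return best if history[best][1] else random.choice(legal_moves)

# After a few recorded games, the program keeps choosing the opening
# that has tended to win in its past experience.
record_game("center", True)
record_game("center", True)
record_game("corner", False)
print(pick_move(["center", "corner", "edge"]))  # -> "center"
```

A human, by contrast, does not keep an exhaustive tally like this, which is exactly the difference the paragraph above is getting at.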

Watson is a little different because it is more open-ended than just playing a game. Watson was created to answer questions. This seems a bit more complicated, because questions on Jeopardy! can cover a wide variety of topics, and now Watson is mainly used in the medical field. Watson has been trained to do many different things, which seems like a more human-like situation. Reading about some of the errors that Watson made in its rounds of Jeopardy! just reminds me that Watson is not human. It made mistakes that a human would not make, like answering “1920’s” after another contestant had already guessed “the 20’s.” There are certain common sense things I do not think we can teach computers.

I am a fan of the Chinese Room counterargument to the Turing Test. I agree that a machine that is merely “simulating the ability to speak Chinese” is not really understanding Chinese, but rather outputting what a developer’s algorithm thinks it should output. Can we really say a machine is intelligent like a human if it cannot fully understand what it is saying? One good counterargument to the Chinese Room is that the definition of artificial is “made or produced by human beings rather than occurring naturally, typically as a copy of something natural; insincere.” This would mean that artificial intelligence is indeed simulating human intelligence, not necessarily having human intelligence. However, this is not the way that many people perceive AI.

I am more concerned about the intentions of those who create AI than I am about AI itself. The computer responds in the way the developer has told it to respond, so the people behind the AI need to be thoughtful about what exactly they are telling the computer to do. For this reason, I find it hard to believe that a computing system can be thought to have a mind of its own. A human can learn things from other people and be taught how to do things, but we have the ability to make our own decisions and judge whether something is right or wrong, or whether we want to do it. We can decide if we want to learn the things we are being taught. A computer does not have these capabilities (at least for now…).

Reading 09: Net Neutrality. We are fighting the same fight.

Net Neutrality is what keeps the internet neutral. By that, I mean it implies that all internet data should be accessible to everyone, regardless of the internet service provider they are paying. Net Neutrality keeps internet service providers from favoring certain websites and purposely slowing down or blocking others. Without it, providers could also block you from using VPNs or certain wifi routers. Net Neutrality is in the best interest of web-based companies, which depend on their customers being able to access their websites. It is not in the best interest of internet providers, who could potentially strike deals with certain companies or get more money from customers by charging them for access to certain things.

Some people argue that Net Neutrality hinders innovation, while others argue it protects it. Without net neutrality, big companies can take total control over your internet use. They can decide what websites you can and cannot access and which websites load faster than others. They can charge users for higher speeds and better experiences. They can make it impossible for new competitors to have a fair shot at gaining exposure to new customers. They can stifle others’ opinions by blocking opposing views and content. Net Neutrality makes for an open internet for all, ensuring fair competition for all companies, new and old. People who argue against Net Neutrality believe it is “another political tool used to reward select groups at the expense of others,” according to Being Libertarian.com. They argue that without Net Neutrality, companies like Time Warner, Comcast, and AT&T could charge the larger companies (like Netflix, Google, etc.) more than smaller companies, because the larger companies use more of their services. These people believe that Net Neutrality is more government regulation in a place where government regulation does not need to be.

I think both arguments have valid points, but neither really answers the other’s concerns. Both sides argue that the opposing position is bad for smaller businesses. One side thinks that giving big businesses control will leave the little businesses in the dust. The other side argues that if providers had control, they could charge the larger companies more money than smaller companies, because the larger companies use more of the services and broadband. Both sides think they can fix the same problem by doing exactly opposite things. This topic is incredibly hard to take a stance on, because I am not sure that either side is entirely right, and with the political climate right now it is either pick a side and support it fully or your opinion does not matter. If both sides realized they are fighting the same battles, maybe they could come together to solve those problems, but instead each side thinks it alone is right and that the other side’s opinions are invalid and wrong.

I agree that maybe companies that use more bandwidth should have to pay more for their services. The analogy from Being Libertarian.com about the wear and tear that cars versus 18-wheelers put on roads really puts the problem into perspective. But I also agree that some regulations may need to be put in place to prevent the other problems that a lack of Net Neutrality could create. So I guess you could say I am in favor of SOME net neutrality, but with limits on regulation.

If I had to implement Net Neutrality, I would focus on ensuring that fair competition exists for all companies. If you use more resources, you should pay more; that is a valid argument from the no-Net-Neutrality side. But I also do not think companies should be able to limit the customer’s experience or shut new companies out of competing. I also agree with the no-Net-Neutrality side that too many regulations can be bad and complicate things. I think the regulations should be more about ensuring fair competition (already an important staple of our economy) and ensuring that customers keep their basic rights (the freedom to browse the internet).

Reading 08: Hi, my name is [insert company name]

Corporate personhood is the idea that a corporation is treated as a person, or can act as a person. It is a complicated idea, and I agree and disagree with certain parts of it. I think we should think of a corporation as the people who work for it, instead of as a physical thing or individual. The problem with this idea is that when a company is celebrated, everyone wants to be recognized, but when a company is punished, no one wants to take responsibility, so the finger is pointed at “the company” instead of the individual people who work at the company. No one person wants to take responsibility for things like property, taxes, and expenses, so this is all put on “the company.” There are responsibilities that “the company” has that I understand a single person should not bear; it does not make sense for the physical property of “the company” to be tied to a single human. However, since “the company” is given these human-like abilities, people start to think of it as a real person, and we start to give this person rights it really does not need. One particular right I do not think is fair is the ability of a company to back a presidential candidate. That means that “the company” is speaking for all the people who work there. The money that those people worked hard for is being used to back a candidate that they may not actually support. The beliefs of a few chairmen should not be put on the people of the entire company. If those people want to give their own money, great! But money should not be given under the name of an entire corporation if the entire corporation does not necessarily agree.

I think what Sony did with the rootkit was unethical. They should not have the ability to install software on someone’s computer without that person knowing it is being installed. Upon reading more about this software, I discovered that it continuously used CPU on the computer and opened up malware vulnerabilities. Sony installed software, made computers vulnerable, and took up precious space, all without the user’s knowledge or approval. One of the articles about Sony, after discussing what one of Sony’s presidents said, wrote “Even Sony’s apology…” This seems wrong to me because the company is not the one doing the apologizing; it is the people within the company who made this decision who should be apologizing. Sony paid money in lawsuits for its actions, but were the individuals who started the problem punished? “The company” was punished, but the individuals responsible did not have to be held accountable. This allows people to hide behind the “person” that is “the company.”

If corporations are afforded rights like a person, they should also be held to ethical and moral standards. That means the real people behind “the company’s” choices need to be ethical and moral. “The company” cannot make decisions or do things. The people behind “the company” should not be able to hide behind “the company’s” name to make decisions they might not make as individuals. People should be held responsible for their actions, not an invisible “person” that goes by the name of the company. In the case of Sony, people made the decision to install this software on other people’s computers, people made the decision to cloak what they were doing, and people made the decision to create the software in the first place. People made these decisions and should be responsible. When you blame something on a company, the individuals who made the decisions get away free of blame and free of discipline.

Reading 07: “Mathematicians are suddenly sexy”

The title of this blog post is a quote from Andreas Weigend, the former chief scientist at Amazon.com, who was discussing the “arms race to hire statisticians.” As a math major, I felt particularly drawn to this statement, but putting my personal bias aside, it is an interesting point. Math has never been a particularly “sexy” thing to study. The ability to analyze all of the data companies are collecting is a skill in high demand right now, and it is definitely making math a more attractive major, but the question is whether this new “sexiness” is ethical.

The amount of data companies are collecting has become a little absurd. This is where they walk the ethical line. When a company responsibly collects data that I have given it permission to collect in order to improve my experience using a particular app or website, I believe that is ethical data collection. When Uber provides me with a list of frequently Ubered locations to fast-track my ordering experience, that is responsible data collection. I am willingly giving them that data by using the app, I understand the use case of this feature, and I agree that it is bettering my experience. If a user willingly sends data to an individual company, I do not see a problem with that company using the data to better the user experience. The two most unethical parts of this problem, in my opinion, are when the data is used for reasons that do not better the user experience, and when data is collected in ways the user did not give permission for.

If I do not give permission for a company to use my location, my camera, or my microphone, then they should not be able to use them. Period. If I do give them permission to use any of these features, then the features should be used responsibly, and I should be notified of every reason they will be used. If a company tells me they want access to my microphone in order to record a video, I expect that is the only reason they want it. Unless a company is straightforward in telling me that it is going to use a feature to collect extra data, like what I am talking to my friends about, it should not be allowed. That is spying, and it makes me uncomfortable. This kind of data often seems to show up in online advertisements, and it is startling when it does. The other day a friend sent me a Snapchat of a suitcase. I opened Instagram and there was an ad for that same suitcase. Could this have been a coincidence? Maybe. Was it super creepy? Yes. I do not know how Instagram got the data to show me this ad, but it felt invasive.

Any of MY data collected by a company should be used to better MY experience. The article “The Convenience-Surveillance Tradeoff” discusses a survey that asked adults about certain situations and whether they were comfortable with data about them being collected and stored in those situations. One of the situations was a “smart thermostat” company that collected data about someone’s movement around their house and would “offer no-cost remote programability in exchange for this data.” In this situation, the company would need to be straightforward with buyers of the product about what their data is being used for. If the data is being used to help the user save money on heating bills or adjust the temperature in certain rooms based on where the user is, then great! Personally, however, I would be creeped out if I suddenly had an influx of ads for kitchen things because I spent a lot of time in the kitchen, and this use of my data had not been made clear to me. If someone willingly gives data with the understanding that it will be used in a certain way, great! But if I am being sold as data with no knowledge of what MY data is going towards, that is bad.

I find some online advertising invasive and some tolerable. Once again, I think this comes down to whether I am aware of my data being used, and whether I know what data is being used for what. A bad example is the Snapchat suitcase story above. A good use of my data for online advertising is when I am looking for presents. For example, I was looking for a men’s bathrobe as a present, so I did a lot of googling and searching on different sites. It was helpful to me, and a better experience for me, that Google then showed me ads for men’s bathrobes, because it helped my search. This is an example of me willingly giving up information about myself to a company (telling Google I am interested in a bathrobe), the company using it in its own context (not within another app or website), and the result bettering my experience as a user (by showing me better deals on something they knew I was looking for).

Reading 06: Edward “The Hypocrite” Snowden

Edward Snowden leaked over a million classified documents from the NSA to journalists. He had access to these documents because he was an NSA contractor working through the consulting firm Booz Allen Hamilton. He has sought refuge in Russia and is hoping to gain a presidential pardon for his actions because he believes “that the disclosure of the scale of surveillance by US and British intelligence agencies was not only morally right but had left citizens better off.” In my opinion, he should not be pardoned. Edward Snowden was incredibly careless and selfish in the release of these documents and did not fully think through the implications of his actions. There were probably much better ways for him to make his point.

I understand his desire to make us aware of the domestic privacy rights he felt were being violated. Unfortunately, however, I think he hurt his own case with his execution. He was trying to let Americans know the extent to which government agencies were encroaching on our privacy. However, as The Washington Post article “Edward Snowden’s impact” states, the amount of information he provided was so overwhelming that it did not give people the chance to react “appropriately” (that is, react the way Snowden wanted). Although a few things were reformed due to the information leak and people’s anger, I do not think it had the overall impact that Snowden wanted. Also, it was reported that Snowden did not even read all the documents that he leaked before releasing them. Because the NSA deals with such sensitive data, it seems very irresponsible to release its documents without first making sure you are not exposing very private information about individuals. Snowden argues that the NSA should not be collecting this data and that we should have more privacy, so imagine if Snowden had accidentally released some of this very private data about American citizens to the media. That would accomplish the exact opposite of the cause Snowden is fighting for.

Also, Snowden released many reports on foreign affairs. This was incredibly irresponsible, because if the rest of the world did not already hate America, it sure does now. I was not surprised at all by the reports about domestic privacy issues. The National Security Agency is trying to ensure security, and I am not surprised at all by its level of surveillance. Whether it is right or not, I am not surprised. The information regarding foreign surveillance, however, is very specific and in some cases a little surprising. It only infuriates other countries more, and could potentially lead to some sort of war. Countries spy on each other; I think this is something people know. But releasing official documentation confirming exactly who we are spying on, and how, is a great way to make even more enemies than we already have and to really screw over your own country. I would not be surprised if all the countries we have documentation of spying on are spying on us as well, but actually putting that information out there, and letting them know exactly what we are doing, is a huge threat to our nation’s security, and it makes me feel even more unsafe. Snowden’s irresponsible release of this information makes me scared for my safety, and not because the US government is spying on me, but because now other countries know.

Personally, I feel a lot safer knowing that the government is actively searching for terrorism. I would rather have someone sitting in an NSA office listen to me tell my mom about my day and what I ate for lunch than be attacked by a domestic terrorist who could have been stopped. I also felt a lot safer when other countries did not know the extent to which our government spies. But unfortunately, we now live in a world where other countries can hate my country because they know exactly how we are spying on them, even though I am sure those countries are spying on the USA too. There are tradeoffs to safety that I accept.

Project 02: Job Interview Process Reflection

The most important part of our guide, in my experience, is the section on interviewing. I had no idea what to expect going into interviews for the first time, and it would have been nice to have some understanding of how to prepare. It is hard to tell someone exactly how to prepare for an interview, because everyone prepares differently, just like everyone studies for exams differently. But it is nice to have some pointers to places to look for sample questions and an idea of what to expect when you step in the room. I actually did not find out about HackerRank until working on this project, and I think it would have been an incredibly useful tool when preparing for interviews. The best advice I have ever received about the job interview process is that if you do not end up with an interview or an offer, you should not be discouraged. If anything, it means you would not have been a good fit at that company, which would have ultimately made you unhappy. If you do not receive an offer, it means that is not where you are meant to be right now, and there is a better opportunity for you somewhere else.

I do not think that colleges should necessarily change their curriculum to prepare us more for the job interview process. If you pay attention in your data structures and algorithms classes, you should probably be prepared enough. I prepare for interviews very differently than anyone I have ever talked to. Most people do a bunch of practice problems and read entire books on interviewing. I instead do the push-ups of learning the concepts in my classes and then do a quick review before an interview so they are fresh in my memory. If you know your stuff and you know your field, you should be prepared.

If I had to change the ND CSE program to better prepare students for the work force, the one thing I would add is testing. Some explanation of, or practice with, testing would have prepared me a lot better for the internships I had; it was one of the big components my company looked at when it came to return offers. If I had to change the ND CSE program to better prepare students for interviews, I would maybe add one optional class that students could take if they want a better understanding of the interview process. Even if it were just a 1-credit optional course that met once a week, having an open discussion about people’s experiences and possible problems would be very helpful.
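To give a sense of what I mean by testing, here is a tiny, made-up example of the kind of automated unit test I wish we had practiced in class. The function and the test cases are hypothetical; the point is just the habit of writing checks alongside your code.

```python
# A made-up example of a unit test: the function and cases are hypothetical,
# only meant to show what practicing "testing" could look like.
# Run with: python -m pytest test_discount.py
import pytest

def apply_discount(price, percent):
    """Return the price after taking `percent` off, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_typical_discount():
    assert apply_discount(100.0, 20) == 80.0

def test_no_discount():
    assert apply_discount(59.99, 0) == 59.99

def test_invalid_percent_rejected():
    with pytest.raises(ValueError):
        apply_discount(10.0, 150)
```

Even something this small, done regularly, would have made the expectations at my internship feel much less foreign.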

Reading 05: Whistleblower or Hero?

The Challenger disaster was absolutely something that could have been avoided. The physical cause of the malfunction was a failed seal on a rocket booster, caused by erosion of an O-ring. The real root cause, in my opinion, was a failure of communication and terrible decision making.

In “How Challenger Exploded, and Other Mistakes Were Made,” we are shown examples of the data and graphs the engineers provided to NASA. I find the data and graphs hard to read and understand. I know nothing about O-rings, but I can see how the data provided may have been unclear. Still, despite the poorly constructed data, NASA should have listened more closely to the engineers, and the engineers should have pushed harder with their opinions. Whether or not the engineers shared easily readable data, NASA was informed that the O-ring would probably fail, and that should have been information enough to delay the launch. NASA was warned about the major malfunction, and a NASA manager’s response was “My God, Thiokol, when do you want me to launch — next April?” The answer to that question should have been yes, wait to launch until it is safe. It is also noted that, contrary to normal protocol, NASA asked the engineers to try “to prove beyond a shadow of a doubt that it was not safe” to launch. NASA should have been trying to protect the astronauts on board, not trying to launch no matter what.

I think Roger Boisjoly was ethical in sharing the information with the public. It should have acted as a push for NASA to do better in the future. Instead of being a situation for NASA to point fingers, it should have been taken as a lesson in better communication and better, more thoughtful decision making. However, NASA learned nothing from Roger Boisjoly’s warning. Instead, the same thing happened seventeen years later, when Columbia broke apart on re-entry and killed another seven astronauts. That was another malfunction that could have been avoided, and lives could have been saved. In this case, NASA managers and Thiokol managers were in the wrong for not sharing what they knew and not taking responsibility for the deaths of seven people. Roger Boisjoly should have been a hero. His opinion on his own creation should have been taken more seriously, and lives should have been saved. His sharing of information about the malfunction should have been a huge lesson for NASA, saving lives in the future, but it was not. Instead, poor Roger Boisjoly was shunned from doing what he loved for standing up for what he believed in, and his career, his life, and everything else were destroyed.

The culture around whistleblowing has unfortunately “discouraged stepping up, speaking out, admitting fault, and making redesigns.” The way whistleblowing is handled makes companies less trustworthy. When faced with a serious problem that could have been avoided at General Motors, the CEO of GM said, “If you are aware of a potential problem affecting safety or quality and you don’t speak up, you’re part of the problem, and that is not acceptable. If you see a problem you don’t believe is being handled correctly, elevate it to your supervisor. If you still don’t believe it’s being handled correctly, contact me directly.” However, an investigation from BusinessWeek “detailed the repeated, failed attempts of one internal whistleblower to fix the problem.” It makes me feel like I cannot trust these companies and their products. If they are going to “go out of their way to hide [problems] and fight the people who expose them,” then why should I trust that their products are not going to malfunction and hurt me?

It seems to me that punishing whistleblowers who intend to better the company is wrong. I do agree that some people blow the whistle for the wrong reasons, and those situations do require repercussions, but other whistleblowers, like Boisjoly, have good intentions and should be able to continue living their lives. In “How Challenger Exploded, and Other Mistakes Were Made,” it is said:

The trick is knowing which errors must be addressed and which can be accepted, and which are being accepted simply because we fail to see how dangerous they are.

I think some whistleblowers, like Boisjoly, truly are trying to make the world a better place, because some errors are being wrongly accepted as “risk.” Being an astronaut is a risky thing. The 14 astronauts killed on Challenger and Columbia probably understood that there was a chance of death, but they were also probably under the impression that NASA had the best intentions of getting them home safely. After hearing the accounts of Boisjoly, I would not have trusted NASA with my life. NASA should have learned.

Reading 04: Digging a Deep Hole of Bias

I believe the lack of diversity is a huge problem in the technology industry. Diversity brings a variety of viewpoints and ways of thinking, and I believe this diversity of opinions and thoughts helps make better, more well-rounded products and software. Not only is there a lack of diversity in the technology industry, there is also a lack of compassion for anyone who thinks differently. This makes for uncomfortable and unwelcoming work environments.

It is unfortunate that the technology industry took such a turn toward “brogrammer” culture, because it makes the industry so unwelcoming to people who are “different.” It was not always this way. The article “Why Women Stopped Coding” points out that “A lot of computing pioneers — the people who programmed the first digital computers — were women.” On top of being the original coders, women made up a rising percentage of the people in technology until the 1980s, when the personal computer came out. The early personal computers popularized the idea that “computers are for boys,” and this seems to be the start of the decline in the percentage of women in computing. Computers were only being bought for boys, and “as personal computers became more common, computer science professors increasingly assumed that their students had grown up playing with computers at home.” In my opinion, this created the exclusivity of the technology industry. I felt this pressure to already have a basic understanding of coding and technology when I took my first computer science class, and it definitely felt exclusive.

The unwelcoming workplace statistics provided in the article “A new survey explains one big reason there are so few women in technology” are both alarming and unsurprising at the same time. The same goes for the fiasco that happened at Uber. Some of the statistics and stories are very alarming and would absolutely deter women from even entering the industry. Reading that 60% of women in a survey of women in technology reported sexual harassment is shocking, because that is two times the national average. However, I was actually unsurprised by a lot of the stories I read, because they are similar to things I have already experienced, and I have not even worked in the technology industry full time yet. This summer I worked at a large tech company and was the only girl in my friend group of interns. I was constantly made fun of or talked down to for being a woman, whether they realized they were doing it or not. At the Grace Hopper Conference, I listened to the story of a Black woman, the CEO and founder of a company, who was eventually pushed out of her position when she was pregnant because “It’s already hard enough having a black woman as CEO, let alone a pregnant black woman.” It is this type of culture that made me decide I could not live in a city like Silicon Valley, where the majority of the population is computer scientists and “brogrammers.”

I need to live in a city not made up of “brogrammers” because I need to be surrounded by people who think differently. That is what the technology industry is missing, people who think differently. In an interview, Melinda Gates discussed when she made the decision to just be herself:

And I started to learn that being myself could work. By then, I was a manager and I ended up inadvertently attracting huge teams around me who wanted to act in the same way. And people would even say to me, “How in the world did you recruit that amazing programmer to one of your teams?” and I would say, well I think they just want to work in this type of environment.

Melinda Gates explained that she attracted top talent by forming teams of people who did not want the competitive nature of other teams. People want to be themselves, and they want to be surrounded by other creative people who think differently. By building diverse, non-competitive teams, we can make a much more welcoming environment for everyone, not just women and minorities, and this can help make the workplace more fun and less stressful.

The lack of diversity that currently exists in technology is bad for business and bad for the future of technology. When asked about the risks of not diversifying technology, Melinda Gates says:

I think we’ll have so much hidden bias coded into the system that we won’t even realize all the places that we have it. If you don’t have a diverse workforce programming artificial intelligence and thinking about the data sets to feed in, and how to look at a particular program, you’re going to have so much bias in the system, you’re going to have a hard time rolling it back later or taking it out.

The longer we wait to make the workplace comfortable for diverse individuals, the more technology suffers from bias and lack of diversity, and the deeper we dig a hole that we will not be able to get out of. Women and minorities bring different points of view and different ways to solve problems. It can also be argued that women bring a level of compassion to their work that men might not. Compassion can make for a more meaningful experience not only for the user of the product, but also for the future computer scientists who work on it.