Unit 5. Data Manipulation
Revision Date: Jan 12, 2020 (Version 3.0)
Big Data has been defined in many different ways. Easy access to large data sets and the ability to analyze them change how people make decisions. Students will explore how Big Data can be used to solve real-world problems in their community. After watching a video that explains how Big Data differs from how we have analyzed and used data in the past, students will explore Big Data techniques in online simulations. Students will identify appropriate data source(s) and formulate solvable questions.
Students are introduced to parallel and distributed programming and to their use with big data.
Journal: How can a computer gather data from people? (Think-Pair-Share)
Discuss: How can the computer learn from people when playing one of these games? How many different answers do you think it could possibly know?
Teacher note: Students are not expected to actually play this game during class.
Read The Rise of Big Data in chunks: An Introduction to “Big Data” (20 mins) Reading can be found at: http://www.foreignaffairs.com/articles/139104/kenneth-neil-cukier-and-viktor-mayer-schoenberger/the-rise-of-big-data
Show the first 3-5 minutes of this clip. (It becomes a bit dry, so just show the amount that is appropriate for your students to get the idea): https://www.youtube.com/watch?v=7D1CQ_LOizA
Some examples of how big data is used appropriately:
Some examples of how big data was inappropriately used:
Students are to pick three topics they want to research that use big data. It is preferred that these topics relate to something learned this year in the course (e.g., the need for IPv6). Tomorrow, as the students enter class, they will sign up on a list with their chosen topic. Since the students will have three options, it is likely they will get one of their selected topics to research.
Journal: Think about your daily and weekly activities. What types of data are being stored about you?
Remind students to think about what they do online, in stores, while in a car, etc.
Review the steps to processing Big Data:
Demonstrate how files such as these can be obtained at http://catalog.data.gov/dataset
Formulate questions such as:
Extract data source into a format supported by underlying tools
Open one of these files in Notepad (or a simple editing program such as Notepad++) and demonstrate how the actual data is separated by commas, hence the file extension ".csv" for comma-separated values.
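To show the same idea programmatically, a short sketch using Python's built-in csv module can split comma-separated lines into fields. The sample rows below are illustrative and in the same shape as the bank files, not their exact headers or data.

```python
import csv
from io import StringIO

# Illustrative rows shaped like the downloaded bank files
# (not the files' exact headers or contents).
sample = """Bank Name,City,State
Westernbank Puerto Rico,Mayaguez,PR
First Bank of Example,Baltimore,MD
"""

# csv.reader splits each line on commas, handling quoting for us.
for row in csv.reader(StringIO(sample)):
    print(row)  # each line becomes a list of field strings
```

Students can compare this output to what they see in Notepad: each printed list corresponds to one comma-separated line of the file.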
Open both files in Microsoft Excel. Complete a find for the bank name “Banco Popular de Puerto Rico” on both lists. You may want to first sort the data by bank name to find this bank or you can use CTRL + F to find the bank name (see screenshots below).
Steps 3 & 4.
Normalize data (remove redundancies, irrelevant details)
In this step, there is technically no need to remove redundancies or irrelevant details, but you can show the students how to remove data or limit it to a particular subset. For example, if you wanted to look only at the banks from Maryland, you could use the filter tool to view only the banks from MD.
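The same Maryland filter can be sketched in Python with the csv module's DictReader. The column names and rows here are illustrative, not the downloaded file's exact headers.

```python
import csv
from io import StringIO

# Illustrative rows shaped like the bank list (not real data).
sample = """Bank Name,City,State
Westernbank Puerto Rico,Mayaguez,PR
First Bank of Example,Baltimore,MD
"""

rows = csv.DictReader(StringIO(sample))
# Keep only rows whose State column is MD, like Excel's filter tool.
md_banks = [row["Bank Name"] for row in rows if row["State"] == "MD"]
print(md_banks)
```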
Make data uniform (user-entered data may include abbreviations, spelling errors, or inconsistent capitalization without changing the meaning of the data)
Cleaning data: Depending on how the data was collected, it may not be uniform, so it may need to be cleaned before it can be processed. Cleaning is the process of making the data uniform without changing its meaning; for example, replacing all equivalent abbreviations, spellings, and capitalizations of a word with a single form.
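A minimal sketch of this kind of cleaning, assuming a hand-made table of equivalent spellings (the replacement table below is hypothetical, not taken from the lesson's data files):

```python
# Hypothetical table mapping equivalent forms to one canonical spelling.
CANONICAL = {"md": "Maryland", "md.": "Maryland", "maryland": "Maryland"}

def clean(value):
    # Trim whitespace and look up a normalized (lowercased) key;
    # unknown values pass through unchanged.
    key = value.strip().lower()
    return CANONICAL.get(key, value.strip())

raw = ["MD", "Md.", "maryland", " Maryland "]
print([clean(v) for v in raw])  # every variant becomes "Maryland"
```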
Import data into the tool
Right now the file is a .csv file. Resaving it as a .xlsx file turns it into a true spreadsheet file.
We have determined that the bank “Banco Popular de Puerto Rico” is on both lists. Now ask the students, “Why is this bank on both lists?” Note: On the Failed Bank list, Banco Popular de Puerto Rico is actually an acquiring institution. By looking more closely at the dates of its acquisition of the failed bank “Westernbank Puerto Rico,” one can deduce that Banco Popular de Puerto Rico may be on the complaint list because it had recently taken over a failed bank; some of the complaints may be related to this recent acquisition.
Explain to students that they will learn more about visualizing their results in Unit 6. They can complete graph visualization in Excel. Show them the website: http://www.gapminder.org/. Explain that even though a visualization in Excel is not interactive like http://www.gapminder.org/, they can complete some form of visualizing their data by using a spreadsheet. Note: http://www.gapminder.org/ is VERY attention-grabbing. Only briefly show the students what they can do with it (see how data changes over time, look at many different data sets, and download data in different forms, including csv and xlsx formats).
Students should research their selected topics from homework. Some possible websites for finding data are listed above under “Possible good resource(s) for data collection.”
Students are to review using http://www.gapminder.org/ looking specifically at life expectancy. Students will write one question after “playing” the timeline of life expectancy using gapminder on an exit slip before leaving class. For example, one may write “Why is the life expectancy of countries such as Denmark, Sweden, & Norway typically higher than other countries throughout most of the timeline?”
Challenge students to work in teams of four to find:
After 2 min stop students and ask them to share how they approached the problem.
Say: You split the task into pieces, and each person worked at the same time to get the job done more quickly than would be possible alone. This is parallelism. In computing, parallelism can be defined as the use of multiple processing units working together to complete some task. There are many different kinds of hardware that can serve as a “processing unit,” but the principle is the same: a task is broken into pieces in some way, and the processing units cooperate on those pieces to get the task done.
Basics of Processes with Python
Students launch a Python 2 IDE such as repl.it or JDoodle and paste the following code, adapted from the Basics of Processes with Python web page.
from multiprocessing import *

# sayHi is the function each new process will run.
def sayHi():
    print "Hi from process", current_process().pid

print "Hi from process", current_process().pid, "(parent process)"
otherProcess1 = Process(target=sayHi, args=())
otherProcess2 = Process(target=sayHi, args=())
otherProcess3 = Process(target=sayHi, args=())
otherProcess1.start()
otherProcess2.start()
otherProcess3.start()
Students execute the code - and debug if needed.
Review the definition of scalability. Ask why the scalability of systems is an important consideration when working with data sets: the computational capacity of a system affects how data sets can be processed and stored.
Ask: What do the numbers mean in each line of output?
Use the questions below as prompts. Students share until the idea that parallel solutions scale more effectively is discussed.
Say: Computer scientists use the term scaling to mean how a process responds to increases in the size of the problem.
Say: In the first activity we saw an example of a program where more than one processor on a computer could be used at a time. Sharing a task among the processors on many computers is an example of distributed problem-solving. Distributed computing shares components of a software system among multiple computers so a large task can be done in less time using the resources available to each of the computers, including both processing and storage resources.
Students read the first 5 paragraphs of the OpenScientist.
Discuss: With elbow partners, discuss the advantages of distributed computing and choose the top three projects that interest you from the list on the OpenScientist.
Say: Parallel and distributed computing are powerful tools but they have their limits in terms of increasing efficiency.
Show this graph of Amdahl’s Law by Daniels220 at English Wikipedia, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=6678551.
Explain: The “speedup” of a parallel solution is the time it took to complete the task sequentially divided by the time it took to complete the task in parallel. The four colored lines represent four programs with increasing portions that can be done in parallel. At first, they all speed up as more processors are used. Beyond a certain point, adding more processors no longer speeds up each program: the sequential portion limits the overall time.
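Amdahl's Law makes this limit concrete: if a fraction p of a task can be parallelized and runs on n processors, the predicted speedup is 1 / ((1 - p) + p / n). A short sketch (the sample values are illustrative):

```python
# Amdahl's Law: predicted speedup when a fraction p of the work
# is parallelizable and runs on n processors.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# A task that is 95% parallel flattens out below 20x speedup
# no matter how many processors are added (limit = 1 / (1 - 0.95)).
for n in [2, 8, 64, 1024]:
    print(n, round(amdahl_speedup(0.95, n), 2))
```

This matches the graph: the curves climb quickly at first, then level off at a ceiling set by the sequential fraction.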
Discuss: With elbow partners, students discuss why they think this is the case. Students share until these key points are verbalized:
Students return to the OpenScientist and choose the project that most interests them.
Say: A parallel computing solution consists of a parallel portion and a sequential portion.
Students identify a part of their chosen project that is done in serial and a part that is done in parallel.
Students are to submit a document stating their topic for research using Big Data. This document should answer the questions:
How is Big Data used to solve or remedy the topic?
Link(s) used to find Big Data (e.g., data.gov)?
How has the transformation of data storage affected how data itself is used?
Answer: Storage and processing of large digital data sets enable us to analyze entire large data sets quickly, rather than the small samples used before.
How can a computer use Big Data to make predictions?
Answer: Computers can use smart algorithms, powerful processors, and clever software to make inferences and predictions for solvable questions.
How can parallel processing help scale a solution to a Big Data problem?
Larger problems often have more parallelizable segments. The more of the process that can be done in parallel, the faster the overall process becomes.