If problems weren’t problematic enough, it seems there is a prerequisite to solving them that many of us are unaware of: the first step of problem solving should be to determine whether an intuitive or methodical approach is more appropriate.

Intuitive problem solving is based on intuition, the brain’s ability to reach conclusions from emotions and learned experience. Because the process occurs entirely within the mind, with no external influence, the route to the conclusion is a one-way street. There is no exact procedure, either, as no two people think exactly alike.

The advantage of intuitive thinking is that it is lightning fast. Intuitive thinkers are also usually in tune with the feelings of others, which increases the likelihood of a widely acceptable result. Ironically, intuitive problem solving falters in group discussions: the individual’s mental process can seem haphazard to everyone else and instigate conflict.

Methodical problem solving, also known as rational problem solving, is as orderly as it sounds. The process involves a step-by-step consideration of facts and evidence and the logical conclusions to which they lead. It is, or should be, the same whether employed by an individual or by a group.

That inherent consistency is the methodical approach’s biggest strength. It allows a large group of people to follow the reasoning and arrive at the same conclusion to a complex problem. On the other hand, it can seem unnecessarily long-winded to an individual and is rarely the first option when assessing an issue on one’s own.

It may seem that the logical, and perhaps only, approach is to use intuitive problem solving as an individual and the methodical alternative in a group discussion. Generally, that is a safe assumption, but the astute leader knows that it is better to pick the best of each world for the best results.

Certainly, follow the methodical approach when explaining your conclusions at each step of the process. However, also remember that people are complex creatures: what is “logical” is not necessarily the most favored option.

Consider a $1,000 smartphone. If phone manufacturers designed their products on reason alone, the phones would be far less costly. Marketing executives, however, bring their emotional understanding to the pricing discussion: they conclude that the exclusivity of a high price appeals at an emotional level to their target demographic and may improve rather than discourage sales.

As an individual, don’t completely forsake methodical reasoning, either. While your instincts may be right on the mark, use step-by-step logical reasoning to buttress your opinion. In the workplace, it is a helpful way to build consensus and to demonstrate to others that your natural talents are aligned with proven methods.
Statistical knowledge for teaching is not precisely equivalent to statistics subject matter knowledge. Teachers must know how to make statistics understandable to others as well as understand the subject matter themselves. This dual demand on teachers calls for the development of viable teacher education models. This paper offers one such model, which relies upon engaging teachers in design-based research. Teachers collaborate with a researcher to design, implement, and analyze instruction to pursue desired statistical learning outcomes for students. The researcher allows teachers enough autonomy to make and learn from mistakes during the process. Unpacking and addressing the mistakes has value as a means of teacher learning. The model and a specific instance of its implementation are described, along with reflections on how productive mistakes during design-based research provide opportunities for fostering the development of statistical knowledge for teaching.

A great deal of statistics education research is about understanding how students learn statistics (Garfield, 1995; Garfield & Ben-Zvi, 2007; Shaughnessy, 2007). A growing amount of statistics education research explores how teachers learn statistics (e.g., Makar & Confrey, 2004; Reading & Canada, 2011). Although research on how teachers learn statistics is important, an additional core issue needs attention: along with learning statistics for themselves, teachers must learn how to help others understand it. In order to do so, teachers need to develop professional knowledge beyond statistical subject matter knowledge. For example, they must learn to assess and understand students’ reasoning (Pfannkuch & Ben-Zvi, 2011) and to select tasks suitable for advancing students’ learning. Learning the subject matter of statistics is a necessary, but not sufficient, condition for teaching it.

Shulman (1987) coined the phrase pedagogical content knowledge to describe the knowledge teachers need in order to make subject matter comprehensible to students. He called it a “special amalgam of content and pedagogy that is uniquely the province of teachers” (p. 8). Researchers in mathematics and statistics education have appropriated, refined, and extended Shulman’s notion of pedagogical content knowledge to describe the professional knowledge teachers need. One such notable effort was that of the Learning Mathematics for Teaching (LMT) project (Ball, Thames, & Phelps, 2008). The LMT model conceptualized mathematical knowledge for teaching as consisting of both subject matter knowledge and pedagogical content knowledge, and made hypotheses about the nature of each of these knowledge domains.

In the LMT model, subject matter knowledge and pedagogical content knowledge are multi-faceted. Subject matter knowledge consists of common knowledge, specialized knowledge, and horizon knowledge. Common knowledge is that which is required across a variety of occupations and is not unique to teaching (e.g., calculating statistics and understanding their meanings). Specialized knowledge is unique to the task of teaching; it allows teachers to appraise students’ non-conventional representations and approaches to problems. Horizon knowledge entails knowing subject matter beyond the school curriculum and allows teachers to steer students’ learning appropriately as opportunities arise (Ball et al., 2008).
If you work in data science or analytics, you’re probably well aware of the Python vs. R debate. Although both languages are bringing the future to life, through artificial intelligence, machine learning and data-driven innovation, each has strengths and weaknesses that come into play.

In many ways, the two open source languages are very similar. Free for everyone to download, both are well suited to data science tasks, from data manipulation and automation to business analysis and big data exploration. The main difference is that Python is a general-purpose programming language, while R has its roots in statistical analysis. Increasingly, the question isn’t which to choose, but how to make the best use of both languages for your specific use cases.

Python is a general-purpose, object-oriented programming language that emphasizes code readability through its generous use of white space. Created in 1989 and first released in 1991, Python is easy to learn and a favorite of programmers and developers. In fact, Python is one of the most popular programming languages in the world, just behind Java and C.

Python is also particularly well suited to deploying machine learning at scale. Its suite of specialized deep learning and machine learning libraries includes tools like scikit-learn, Keras and TensorFlow, which enable data scientists to develop sophisticated data models that plug directly into a production system (a minimal sketch of this workflow appears at the end of this section). Jupyter Notebook, an open source web application, makes it easy to share documents that contain live Python code, equations, visualizations and data science explanations.

R is an open source programming language that’s optimized for statistical analysis and data visualization. Developed beginning in 1992, R has a rich ecosystem, with complex data models and elegant tools for data reporting. At last count, more than 13,000 R packages were available via the Comprehensive R Archive Network (CRAN) for deep analytics. Popular among data science scholars and researchers, R provides a broad variety of libraries and tools for cleaning and preparing data, building statistical models, and creating visualizations and reports.

R is commonly used within RStudio, an integrated development environment (IDE) for simplified statistical analysis, visualization and reporting. R applications can also be used directly and interactively on the web via Shiny.

The main distinction between the two languages is in their approach to data science. Both open source languages are supported by large communities that continuously extend their libraries and tools. But while R is mainly used for statistical analysis, Python provides a more general approach to data wrangling.

Python is a multi-purpose language, much like C++ and Java, with a readable syntax that’s easy to learn. Programmers use Python for data analysis and for machine learning in scalable production environments. For example, you might use Python to build face recognition into your mobile API or to develop a machine learning application.

R, on the other hand, is built by statisticians and leans heavily into statistical models and specialized analytics. Data scientists use R for deep statistical analysis, supported by just a few lines of code and beautiful data visualizations. For example, you might use R for customer behavior analysis or genomics research.
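To make the Python side of this comparison concrete, here is a minimal sketch of the pandas-plus-scikit-learn workflow described above. The file name (customers.csv), the column names, and the choice of model are hypothetical placeholders for illustration, not a recommendation:

```python
# Minimal sketch: load tabular data, train a model, evaluate it.
# Assumes a hypothetical customers.csv with numeric feature columns
# and a binary "churned" target column.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Load the data into a DataFrame and split features from the label.
df = pd.read_csv("customers.csv")
X = df.drop(columns=["churned"])  # feature columns
y = df["churned"]                 # binary target

# Hold out a test set so the model is evaluated on unseen data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Fit a classifier and report accuracy on the held-out set.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

In practice, the fitted model object can then be serialized (for example with joblib) and loaded inside a web service, which is what “plugging directly into a production system” typically looks like.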
We keep moving forward, opening new doors, and doing new things, because we’re curious, and curiosity keeps leading us down new paths. While college teaches you a lot, I think what it might do best is drive a curiosity within you and a desire to learn more.

Yes, you can go to class every day. You can listen to the lectures and discussions and pick up on the material. You can take the tests, get good grades, yada yada yada. Of course, this is all important; a good GPA will help you earn jobs and other opportunities down the road. But if you really want to make the most of a college curriculum, I think being curious is the best way to do so.

Learning something only for the test and then forgetting it seems a little pointless, right? After all, you should learn to understand something, whether it is a conceptual idea or a technical skill. And if you truly want to flesh out your ability to learn, being curious and asking “why” is probably your best bet.

I am an operations and information management (OIM) major, and when I tell that to people, it mostly just flies over their heads. In my coursework, I’ve been learning a bit about database structures, analytics, and the “internet of things.” If you had told me three years ago that this is what I’d be doing in college, I probably wouldn’t have believed you. I changed my major twice before settling on OIM, but ever since I started learning about the topic, my interest in the field has only grown. There’s already so much data out there (over sixteen trillion gigabytes), and that number grows every day. What pushes my curiosity in the topic is the desire to have a part in handling it all: an important task that keeps society functioning day to day and helps people communicate effectively and make decisions in a resource-friendly, ecologically efficient manner.

One of my favorite summer reading books growing up was Rocket Boys by Homer Hickam, based on the true story of a group of boys fascinated by rocketry in a small mining town in 1957 West Virginia. Against all odds, the boys built and launched model rockets, eventually working their way out of a town where most young boys grew up to become coal miners. Driven by their curiosity, the boys reached a previously unfathomable goal. Applying that lesson today can inspire us to keep a curious mind.

I think the Rocket Boys set a good standard to look up to; it’s never a bad thing to be curious, and our curiosity often pushes us to otherwise unreachable places. Behind everything we learn and everything we observe, there may be context we are missing, and we can only understand that context if we ask “why?” I’ve found this way of analyzing situations important both in my work here at the Honors College and as a reporter for the Daily Collegian newspaper, where asking why something is the way it is challenges an interviewee to reflect and give a more thought-out answer than surface questions would. And it’s not just for asking others: asking yourself why you see something the way you do is a helpful self-reflection tool.