Thursday, December 19, 2013

What We Know About How Children Learn Math - And How It Can Help Us Close the Achievement Gap

(This week's post is the first half of a two-part article I wrote for Footnote1.com.)

When it comes to math, American students lag behind their counterparts in many European and Asian countries, as do American adults. Our nation’s fourth graders are outperformed in math by students from Singapore, Korea, Japan, Northern Ireland, and Hong Kong, while the U.S. ranks 19th in adult math skills among advanced democracies. These problems exist despite the fact that we spend $1.3 trillion a year – nearly 9% of the American GDP – on education. Why is such a well-funded system failing its students?

Read the full article...

5 comments:

  1. In a 2011 study, Clements and colleagues noted that, of all studies related to mathematics education, fewer than 2% examined the efficacy of curricula. Bhatt and Koedel (2012) likewise found very little empirical literature on curricular effectiveness; and MOREOVER, what is effective in one context is very often not effective in another. Why is this surprising? Problems: how do we research curricula; who has access to students; which students; how long do we continue the studies; amid today's light-speed technological changes, how do we integrate curricula that will change based on any number of valid issues; who is paying for these studies...??? We need strong collaborative relationships among all stakeholders, databases for storing findings AND ways to access those findings seamlessly...and a whole host of other things I don't know that I don't know we need! In the case of your former work, it doesn't scare me that teachers were part of the design...they SHOULD be...it scares me that once it left that initial process, efficacy wasn't tracked. (What I've created in MY mind doesn't always translate to what I produce, so I sure can't expect it to translate in what someone else is producing for me...where's the feedback loop?) But this isn't hopeless; we've never been in a better position to disseminate information...so let's start looking and creating networks of knowledge sharing and innovation. Go...

    Bhatt, R., & Koedel, C. (2012). Large-scale evaluations of curricular effectiveness: The case of elementary mathematics in Indiana. Educational Evaluation and Policy Analysis, 34(4), 391–412.

    Clements, D. H., Sarama, J., Spitler, M. E., Lange, A. A., & Wolfe, C. B. (2011). Mathematics learned by young children in an intervention based on learning trajectories: A large-scale cluster randomized trial. Journal for Research in Mathematics Education, 42(2), 127-166.

  2. This article reinforces much of what we know as educators: the importance of students' backgrounds, their individuality, and the products available to teachers.

    First, students do begin schooling with varied backgrounds: prekindergarten educations, vocabulary acquired through communication, and life experiences. Many of the children I work with live in single-parent homes. I know how the parents struggle to find time for their children, even for conversation. Knowing the backgrounds of the children who come to us is fundamental to being an educator. Using William T. Powers's model of Perceptual Control Theory, teachers need to understand the student's perceptions in order to know what we are controlling. In number sense, that means creating a network of patterns connecting quantities to numerals and written words. Without a firm foundation of these patterns, math performance suffers.
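
    To make the Perceptual Control Theory framing concrete, here is a minimal illustrative sketch of a perceptual control loop, in which action is adjusted to shrink the gap between a perceived state and a reference state. The function names and numbers are invented for illustration; this is not Powers's own notation or an implementation of his model.

    ```python
    # Minimal sketch of a perceptual control loop in the spirit of Powers's
    # Perceptual Control Theory: behavior acts on the world to keep a
    # perception near a reference value. All names and values are illustrative.

    def control_loop(reference, perceive, act, state, gain=0.5, steps=25):
        """Repeatedly compare perception to a reference and act to reduce the error."""
        for _ in range(steps):
            perception = perceive(state)      # what is actually perceived
            error = reference - perception    # discrepancy that drives action
            state = act(state, gain * error)  # action changes the state, not the perception directly
        return state

    # Toy example: "state" stands for a child's fluency connecting quantities
    # to numerals; the reference is the level of fluency being aimed for.
    final_state = control_loop(
        reference=10.0,
        perceive=lambda s: s,                      # perception here is just the state itself
        act=lambda s, adjustment: s + adjustment,  # each action nudges the state
        state=2.0,
    )
    print(round(final_state, 2))  # approaches 10.0 as the error is driven toward zero
    ```

    In classroom terms, the point is simply that instruction acts on conditions, and it is the student's perception, not the teacher's, that determines whether the gap is closing.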

    Teachers also have to allow for the differences within each child. When working with a small group, is it better for a child to be with others of the same ability, or is a mixed group more effective for each child? Does the teacher have the firm foundation needed to facilitate the instruction? Research shows that math anxiety in educators produces students with math anxiety (Maloney & Beilock, 2012).

    Lastly, educators have a plethora of information and tools from which to choose their teaching methods and materials. As you pointed out in the article, that does not necessarily mean a product has been shown by research to be an effective means of teaching. Until educators begin to rely on the work that is being done for us by educational researchers, they will continue to spin their wheels at the potential cost of each student. Sometimes, progress is less about the achievement gap, and more about how to get teachers to care about the research that is available.

    Maloney, E. A., & Beilock, S. L. (2012). Math anxiety: Who has it, why it develops, and how to guard against it. Trends in Cognitive Sciences, 16(8), 404–406.

    Powers, W. T. (1998). Making sense of behavior: The meaning of control. Benchmark Publications.

    Lori Mayfield
    First Grade Teacher
    MBE Graduate Student, UTA

    Replies
    1. Thanks for your thoughtful comment, Lori. I’d like to focus on one point in particular.

      > Until educators begin to rely on the work that is being done for us by educational researchers, they will continue to spin their wheels at the potential cost of each student.

      I agree that focusing on the research-to-practice connection is absolutely critical. Education is, puzzlingly, one of the few major areas of human endeavor that has not fundamentally changed much during the past century (or more) as the relevant sciences and technologies have matured. The reasons are complex, but there is certainly no shortage of new insights about learning and teaching coming out of systematic research that we could use to improve educational practice at scale.

      > Sometimes, progress is less about the achievement gap, and more about how to get teachers to care about the research that is available.

      In my experience, the hurdle is less about getting teachers to *care* about the available research and more about making that research *usable* for them. Having an explanation for why a particular child has difficulty reading text, for example, doesn't make it immediately obvious how a teacher should act differently. That requires a whole other systematic (applied) research phase. Currently we have far too few people who are able to translate scientific models of learning into practical models and guidelines for practice.

      One proposal for doing this that seems to be widespread on the researcher side is that we should train classroom teachers to be critical consumers of the scientific literature. The idea would be, for example, that all K-12 teachers should be trained to read articles on brain imaging studies of dyslexia, and thereby gain insight into how to teach dyslexic students.

      This may seem reasonable on the face of it, and it is certainly a widespread notion (I heard it voiced by more than one person at the International Mind, Brain, and Education Society conference a few weeks ago).

      Personally, though, I don't see how this could possibly work at scale. It takes 10 years to master a complex domain like neuroimaging, and there are several such scientific domains that could feed into educational practice. The people who go into teaching are not, by and large, the people best suited by training or disposition to read and make sense of such literature. And there isn't time in a teacher's workday to look up and read these articles on an ongoing basis; it can take hours to read and make sense of a single article.

      And then there's the problem of translating an explanatory scientific model (of dyslexia, for example) into usable techniques, and of validating that the translation has been done correctly and that we get better outcomes. That is a separate research project, and it requires skills distinct from being able to read the literature.

      I am very glad you raised this issue, because I see it as the single most important challenge that the field of Mind, Brain, and Education (MBE) must address right now. We have a lot of currently unusable scientific insights that could enable us to improve education dramatically at scale. We first have to make them usable for teachers at scale. But before we can do that, we have to figure out how to do the translation and who will do the translation work. Just as teachers are not equipped to translate from the scientific literature, most scientists and researchers are not equipped to translate into educational practice.

      My own view is that we need to create a system that can support distributed coordination so that each party can contribute what they know best to the translation and draw what they need most for their own work.

      I have elaborated on this particular proposal in a book chapter:
      Connell, M. W., Stein, Z., & Gardner, H. (2012). Bridging between brain science and educational practice with design patterns. In Della Sala & Anderson (Eds.), Neuroscience in Education: The Good, the Bad, and the Ugly.

  3. The article, originally written almost a year ago, is still relevant to current issues in education. In particular, mathematics has become a new focus of education in the United States, with teachers facing ever-present pressure to prepare students for STEM fields.

    Teachers face pressure from their campus administration, and even from district administration, as they are evaluated on their practice more closely each year. This pressure to increase student achievement and close the gap in order to prepare students for future careers in STEM seems to lead teachers to look for the "next big thing" that will help them prepare students in the classroom. This situation creates the opportunity for educational products and strategies to be created that are not cemented in scientific research and have not been proven to be effective. Then, as teachers implement these new products and strategies and fail to obtain the expected results, anxiety arises, leading to poorer outcomes (Maloney & Beilock, 2012). Teachers could gain a better understanding of their students' learning with a new product or strategy by building multiple feedback loops into a lesson, but that is only possible if they are shown how to create these feedback opportunities (Chan et al., 2014; Powers, 1998).

    I agree with Lori that the pressure on teachers to close the achievement gap should be redirected into efforts to better train and prepare teachers in both content and instructional practice: lack of proficiency in content leads to teacher anxiety and low student performance, while unstructured instruction without feedback opportunities for both teachers and students yields only a superficial picture of student understanding. In addition to being supported in their instructional practice, teachers (along with administrators) should be taught how to search for reliable, valid research and how to use the findings to guide instruction. Teachers should also know about, and attend, conferences led by scientists (not only by other educators speaking from personal experience), such as the one held in Fort Worth, Texas by the International Mind, Brain, and Education Society, where educators could ask the scientists questions directly and offer them ideas for future research.

    To further student achievement, I think that, beyond teacher-scientist partnerships, different countries should create partnerships in which they can collaborate on the different ways each country teaches each subject. I am sure that all parties involved could benefit from such a collaboration.

    In sum, in order to close the achievement gap, whether between students from different backgrounds or between students in different countries, three things are necessary: (a) teacher preparation in mathematics content, (b) teacher preparation in instructional practices based on reliable, valid, and peer-reviewed research, and (c) true collaboration between teachers and scientists.


    References:
    Chan, P. E., Konrad, M., Gonzalez, V., Peters, M. T., & Ressa, V. A. (2014). The critical roles of feedback in formative instructional practices. Intervention in School and Clinic. doi:10.1177/1053451214536044

    Maloney, E.A. & Beilock, S.L. (2012). Math anxiety: Who has it, why it develops, and how to guard against it. Trends in Cognitive Sciences, 16(8), 404-406.

    Powers, W.T. (1998). Making sense of behavior: The meaning of control. San Diego, CA: Benchmark Publications.


    Mara L. Alvarez-Delgado
    Bilingual Teacher
    MBE Graduate Student, UTA

    Replies
    1. Hi, Mara. Great comments!

      Some of my response to Lori is relevant here, too. In addition, you raise another important issue I'd like to focus on.

      > This situation creates the opportunity for educational products and strategies to be created that are not cemented in scientific research and have not been proven to be effective.

      I agree with what I take as the spirit of this point: that we should be taking a more rational approach toward educational design and practice. That is a central premise of this entire blog, in fact.

      I think we have to be careful about how we set expectations for the role of the science, however. If we insist on allowing only strategies and products that are "scientifically proven" to be effective, I think we would never see any progress at all. The reality on the ground today is that almost none of the products and practices currently in use in mainstream education have much, if any, scientific evidence on efficacy to support them, despite decades of intense attention. If we set a bar requiring that anything new be scientifically proven before it can be adopted, nothing new would ever be adopted; the only things teachers would have access to would be the current offerings, or ad hoc strategies and products that they make up themselves and that don't go through any formal vetting process. In addition, science doesn't ever really "prove" anything, so it's not clear what that standard would entail in any event.

      Instead, I think we need to focus on ensuring that we have a more rational process for developing educational products and strategies, and that we establish methods to validate that what we are introducing is better (or at the very least no worse) than the products and strategies currently in use. That's a much more realistic standard, and one that would support continuous improvement over time. If we take the scientific/rational approach seriously, we would also design our products, and the systems in which they are embedded, to enable us to figure out how to improve them over time, never assuming we got it "right" or that the product was really "done" once and for all.
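
      To make the "better, or at least no worse" standard concrete, here is a minimal illustrative sketch of one way such a comparison could be run. It is not drawn from the article or from any particular product evaluation; the scores, the sample sizes, and the tolerance margin are invented assumptions.

      ```python
      # Sketch of a "better, or at least no worse" (non-inferiority style) check
      # comparing outcomes under a new product with the one currently in use.
      # The scores and the margin below are invented for illustration only.
      from statistics import mean, stdev
      from math import sqrt

      current = [62, 70, 68, 74, 65, 71, 69, 66, 73, 67]  # assessment scores, current product
      new     = [66, 72, 70, 75, 68, 74, 71, 69, 76, 70]  # assessment scores, new product

      diff = mean(new) - mean(current)
      # Standard error of the difference in means (independent samples).
      se = sqrt(stdev(new) ** 2 / len(new) + stdev(current) ** 2 / len(current))
      ci_low = diff - 1.96 * se  # rough lower bound of a 95% confidence interval

      margin = -2.0  # the largest drop in scores we are willing to tolerate
      if ci_low > margin:
          print(f"New product looks no worse than current (diff = {diff:.1f}, CI low = {ci_low:.1f})")
      else:
          print(f"Cannot rule out that the new product is meaningfully worse (CI low = {ci_low:.1f})")
      ```

      The point is not the particular statistic; it is that a comparison against current practice, rather than an absolute standard of "proof," is what gets built into the adoption process.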

      In short, you are absolutely right that to move forward we will need to close the feedback loops with (at least) students, teachers, administrators, curriculum providers, and learning researchers, so that everyone has the feedback they need to do their part well and to improve over time. We definitely have the know-how, in terms of both the learning science and the technology, to make this happen today. I've personally been involved in building more than one evidence-based, scalable educational technology platform that supports it. For example:

      Lexia Learning Systems - Cross-Trainer: Visual-Spatial
      https://www.youtube.com/watch?v=I6mdYDlMQH4

      Native Brain
      http://www.nativebrain.com

