Archive for Learning:

September 28, 2011

By jleffron

Comments

2 Comments

Posted In

Learning, Rigour

Critically Flawed

We have met the enemy and he is us.

Multiple Working Hypotheses vs Pet Theories

It was “smackdown” time in my brain last week. I fell into the situation quite innocently.

When I’m working to create change, or to build something new, it’s easy to become so busy being the ‘champion’ for my project (or ‘evangelist’, if you like) that I forget the need for Multiple Working Hypotheses.*  This is understandable.  If I decide to work very hard to develop “B” in response to flaws in “A”, my (non-objective) gut feeling is that I’m fixing things. I start to feel like “A” and “B” are the only games in town; I’ve done something good here by replacing erroneous “A” with meritorious “B”. But it could well be that “B” is as flawed as “A”. If “A” is ‘wrong’ and “B” is different from “A”, it does not necessarily follow that “B” is ‘right’, only that “B” is not “A”. To do my work well, I really need to consider (create, if necessary) “C”, “D”, and possibly “E”.

This reality hit me smack between the eyes at about 7:00 on Saturday morning. Right up until that moment, I’d seen my pet project of several months as this lovely shiny solution. But as I stood in the kitchen, waiting for the coffee to finish brewing, every weakness of my approach became clear to me.  The fact that another approach was problematic in some areas did not make mine right, even if I felt I was addressing a particular set of flaws in the old approach.  I’d become my own worst enemy – lack of critical thought left my project critically flawed.

It’s tough not to become enamoured with a particular theory or approach, and that’s where Multiple Working Hypotheses come in handy. Having more than one working hypothesis not only cuts back on things like confirmation bias, it also makes for stronger research.  Entertaining multiple theories leaves my mind open to new connections, often by pushing me to examine a wider body of data or evidence. If I don’t have a pet theory I’m nurturing along, I’m more open to new insights and surprising (but true) conclusions.

I’m back to the drawing board with my project. It’s not a blank slate, by any means.   The work I did before still has merit but it was only a partial view of the whole picture.  So I have a bigger drawing board now, with lots of room to examine a wider set of ideas. A little dose of objectivity diverted my work; the end result will be much stronger for that diversion.


* It wasn’t until I started writing this that it occurred to me that the concept of Multiple Working Hypotheses, coming from the work of a 19th century geologist, was commonly known in Geology departments, but not something I’ve heard much discussed in other disciplines.

September 26, 2011

By jleffron

Comments

No Comments

Posted In

Blogs and Presentations, Learning, Rigour

In Praise of Critiques

I don’t like criticism, which probably puts me in pretty good company. But I like critiques very much.

Yesterday there was some lively discussion on Twitter  about the use of public critiques as a learning tool. The phrase “learning tool” is the key – critiques are about learning, not correction.

Critiques are not a new tool for learning; many university Music and Art Departments post their guidelines for effective critiquing online. Nor is the tool restricted to the world of the arts; at its best, public critiquing is a civilized discussion of a person’s work, looking at what’s good, what’s lacking, and what other avenues could be pursued. It’s a blend of offering needed objectivity and providing additional germane resources. It’s discourse, it’s learning, and it can be a bit unnerving to the uninitiated; but that last bit doesn’t have to be the case.

Consider the popular site Critters.org where there are loads of people actively seeking critiques of their novels and short stories. Why would someone do this? If you’ve written a piece of fiction, even if you (rightly) suspect that it’s absolute dreck, you’ll be irrationally fond of it, protective of it; you love it like your firstborn child, so why seek out critiques? The answer seems to be simply this: for writers, improving in their craft matters more than their fear of hearing something negative about their work; they know they need that objective outside voice.

Common Objections

Supposing you’re sold on the notion of critiques. It’s still not all sunshine and daffodils, because there are a few hurdles to clear before most people are going to jump on the “Critiquing is Awesome” bandwagon. And they’re big ones.

If critiques are going to be effective as a learning tool, there are some logistics to be addressed, in the form of trust, culture (individual and organizational), and personality. Some of these are clearly not in your control; but you can usually address enough of them to be successful. Being aware of obstacles allows you to address them up front in your learning design.

As an example, in some organizations the nature of the work is such that “failure is not an option” (things like nuclear power plants and neurosurgery come to mind), but that’s the big-picture view of the performance of the job. During the learning process, it has to be clear that critiques are not indicators of failure but stepping stones to success – or at least are akin to bumper bowling, keeping you from drifting too far off target.

Of course, there may be some cultures where critiques are a bad match. In a company with an up-or-out policy critiques may turn to daggers pretty fast.

Making it Work

In most circumstances it’s probably not the best plan to jump in with full throttle critiquing.

The whole notion of critiques is a bit contrary to our nature, or at least to our habits. It’s a bit like going for a swim in a very cold lake – most of us like to ease in a little at a time; jumping in right off the bat is likely to shock all but the hardiest folks right back out onto the shore. So, you need to help people want to swim in the (metaphorical) water, and then make it as painless as possible for them to ease into it.

Let’s look back at those writers who willingly seek critiques – what’s going on there? In general it boils down to a sense that “the success of my project, the excellence of my work, has risen to a higher priority than my fears or feelings”. We might call this “engagement” in the learning world. There has to be a sense that what is being achieved is worth the investment.

So, let’s say you’ve done your up-front work and you have a group that says “we’re so committed to excellence in understanding that we want mutual public critiquing here so we can maximize our learning curve”. You still have real people with real fears and feelings, so how do you ease them into critiquing?

The first step will be tough: as an instructor you need to minimize the teaching and leave a void for people to fill. Cover the principles and potential pitfalls of the subject, but then turn folks loose. Critiquing is participatory by its nature. Too much noise from the instructor tends to silence the other voices; provide guideposts as needed, but let people wander around a bit and find their way.

Imagine the following scenario: you’re an instructor trying to build critiquing in your class. What might you do to get the conversation started?  One approach might look like this:

  1. Step away from spoon-feeding information and then fill that gap with brain-work. You tell your learners: “Read 5 of these 8 sources (and others if you like), then write an opinion/commentary on 2 of them. Leave comments on at least 4 other posts; your comments should be substantial. Bring in relevant information from other sources to support your comments as needed.” Repeat every week and pretty quickly comments evolve into meaningful, complex discussions. The learners realize that critiques aren’t red marks on paper; they are conversation, an invitation to discourse.
  2. Build up to the big time. When there is a high comfort level, you might move on to oral assessments, taken collectively, where students can amplify or refute comments made – shared marks on this kind of assessment can lead to a group of learners really working to build the best possible responses to questions. In this setting the only failure is lack of preparation, thought, or constructive contribution. I’ve seen this work in real life – the students all agreed it was the most they’d learned in a class, and the most excited they’d been about preparing for classes.

Make no mistake: the instructor does not get off easily in this format. Sure, there’s less lecturing on information, but the workload shifts to preparation and mediation. Curating a list of readings and references, creating the discussion formats, and addressing the group culture and checks/balances that will work with the learners at hand will all take some serious time on the front end. Even more important will be knowing the learners – knowing what’s going to convince them that (insert topic here) is vital to them.

A lot of time, especially early on, may be spent providing mediation in conversations in order to keep balance, allow voices to be heard, and redirect non-constructive approaches. This requires a bit of a deft hand, and more than a dash of diplomacy in some cases. But it’s really a big chunk of the learning – learning how to exchange ideas (even differing ones) rationally and respectfully; recognizing that you just might be wrong about some things; and accepting that the others may also be quite wrong, but collectively you might all hash through the wrong and get to something approaching “right”. In particular, if this is the first time learners have walked through this model, there may be a learning curve from violent disagreement (or violent agreement) to actual discourse.

When done well, critiques are a bit like Beta testing for ideas. Ideally, in the course of discussion, terms are defined, assertions are supported, and a spectrum of opposing opinions is allowed, all of which refines the original work.  And like Beta testing, the learning comes from hashing through the “Whys” in the critiques as much as from the “Whats”.

April 7, 2011

By jleffron

Comments

One Comment

Posted In

Learning

Expert Opinions

Reflections on the 500 Words Project

Looking back at the Purpos/Ed 500 Words project, what strikes me most is not any particular post (although there were a number that were impressive and thought provoking).  It is not the occasional debates that cropped up; although a few got rather vigorous, I think often even participants in the most heated discussions were (at heart) in violent agreement with each other, at least as far as their deep commitment to students.

What sticks most clearly in my mind are events that never made it into my 500 Words.

Meeting with the ‘End Users’

Two weeks ago, I was sitting by a basketball court, doing some final editing of my post, while a group of about 20 teens were shooting hoops and joking around in a game that seemed to have no recognizable rules. Periodically someone would stop by and ask what I was doing.   I realized I had a golden opportunity – here was a group deep in the midst of their formal education, remarkable mainly in that they were very comfortable in their own skins; they didn’t feel a need to impress.

So, after I told them what I was doing, I asked them:

“What do you think is the purpose of education?”

It’s not surprising that the first reaction was something along the lines of “It’s so when we grow up we can get jobs and have a good life”, often delivered with a tone that implied they viewed this platitude-like response with a hint of contempt, not fully doubting it, but wearied of having it said to them so often.

But then, something would happen.  They’d catch themselves mid-answer, and stop parroting what they figured was expected.   And you could see them thinking; see their eyes lighting up with inner conviction. It seemed like they were no longer talking to me, but having an inner conversation.  Almost without exception, they would start talking about education being about something more than jobs or personal gain, about how its purpose was to equip them to do something of worth for their country, or for the world.

At a fundamental level, they each saw the purpose of education as leading to something bigger than themselves. And they were willing to do the work it took to get to that “something bigger”.

Those conversations left me impressed and hopeful for the future.  They also made it clear that learners of any age have little tolerance for learning if they see it as pointless, but have impressive drive if they grasp that the work they are doing is a stepping stone to something bigger, something that matters.


March 25, 2011

By jleffron

Comments

6 Comments

Posted In

Learning

The Purpose of Education

There is a distinction between the purpose of education, the purpose of schools, and the purpose of learning. The three may sometimes share some overlapping terrain, but they are, nonetheless, discrete topics.

The purpose of schools can be any one of a myriad things depending on who “owns” the schools and what they want or need to achieve through them (whether that ownership is in the hands of a government, community, industry, or other individual or organization). This is fully reasonable: whoever is running a school does so to fulfill their specific goals and agendas.

The purpose of learning is a bit harder to pin down. Learning may serve a practical purpose or meet a need (learn to read, learn to fix a leaky pipe, learn to say ‘please’ and ‘thank you’). At other times, learning may be motivated purely by personal interest, like a friend who learned to read ancient Chinese simply because it interested him. Any practical effects (e.g. building mental discipline) were merely ancillary; they were not the reason for learning.

The purpose of education goes back to the root word, educare: “to bring up”. That begins to suggest something. Bringing up children (or puppies, or gardens) involves care, nurturing (and perhaps a bit of redirection or pruning); but “bringing up” does not add to what is already there, it merely allows it to develop.

Taking things one step further, there is the related root, educere: “to draw out”. Someone is drawing something out of the one being educated, maybe it’s a teacher, a colleague or even one’s self. If something is being “drawn out”, it had to exist in the first place.

In short:   The purpose of education is to allow people to reach their full potential.

That’s not as slippery a statement as it might seem. Full potential resides in the individual, not in jobs, tasks, families, or societies (those are where individuals put their potential into play). Since “full potential” is an individual thing, it can cover a lot of territory: some people are going to be clever at math, some are artistic, some observant, skeptical, optimistic… That all fits under the umbrella of potential.

For a person to achieve their full potential takes hard work, and there are three key elements to drive a person to do that work: Need, Hope, and Opportunity.

Youth in poverty may see the Need for education but might not have Hope or Opportunity. Youth in affluent places may have buckets of Opportunity, but might see no Need; they have everything, so why work to become more than what they are? But when all three elements converge, it’s a powerful thing. It’s the girl who grew up picking through garbage at a dump in the Philippines, who saw the Need for education, held on to Hope and was given Opportunity, and went on to get a college degree so she could make things better in her community. It’s the ordinary middle class kid who uses their Opportunities, sees a Need, and goes beyond complacency into excellence motivated by Hope.

Ultimately the purpose of education is not rooted in politics, ideologies, or educational theories; those are more likely to get in the way. It’s rooted in individuals who reach their full potential, and then turn around and use that potential to help the next generation find their own Need, Hope and Opportunity so they can reach theirs.

February 16, 2011

By jleffron

Comments

No Comments

Posted In

Learning

The Mooreeffoc Effect: Learning Through the Wrong Side of the Door

Through a glass darkly…Through a door backwards

I’ve long been familiar with the Mooreeffoc Effect but hadn’t given it much thought until I started working on world building for a digital learning project.

Mooreeffoc is taking the familiar and looking at it from a new angle – shining light into a cobweb-laden, dusty corner, taking the familiar and making it unfamiliar. Dickens described the term as coming to him when he abstractedly noticed the words “Coffee Room”, backwards, on the wrong side of a glass door. It transformed the familiar and prosaic into something new and captivating.

Mooreeffoc allows you to see the same world through new eyes; it strips away the dulling film of paradigms and assumptions and lets you really see what is there.

Why it Helps

When we try to teach something (or learn it) we’re usually burdened with the familiar – with thick layers of assumptions and generalizations (based on experience) wrapped around us like down cushions so thick we’re well insulated from any real impact.

It is difficult to learn new things when the “space” or “subject” is one we’re quite at home with. The mental filters are up; one will only see what one has ever and always seen… and in general will do what has always been done. We anticipate what comes next and plan our responses well ahead, based on what we know.

Brains are like that – they find efficiencies so that there’s space left for the tricky bits; that’s great if you’re in the forest wanting to avoid becoming dinner for the local mountain lion, but not so good for workplace innovation.

When building a learning environment, the trick is not to have the situation so foreign that all of a participant’s mental energy is used up in managing/processing the environment, but just foreign enough that it’s possible to see new details, form new connections, do things differently. Mooreeffoc.

Doing this requires a learning environment that’s real enough that participants can immerse themselves; if they can easily “see outside the walls”, the illusion can quickly break down and the Mooreeffoc effect can be lost before it’s done any good. It also helps if the environment allows discovery, not the imposition or exposition of the discoveries of others. The flash of understanding that Dickens got from his coffee room door was a very different experience from what you or I receive just reading about it.

In a Walled Garden

Now, that’s not as tough as it sounds: you don’t need a 3-D immersive environment, just some careful and thoughtful learning design (using the words “careful” and “thoughtful” quite literally, here).

What helps in this effort is that our lazy brains actually don’t WANT to see outside the walls… I see this in myself all the time.

I can read Vilenkin’s “Many Worlds in One” (physics for the non-academic) and the book might be right, or it might be pretty much rubbish – but it’s just enough beyond my inherent knowledge base that I can’t see over the walls to know for sure. And I’m really pretty happy to stay that way – I’m not bothering to do the mental equivalent of fetching the step ladder and a spy glass. It jibes enough with what I do know, and I find Vilenkin’s conclusions appealing. So it doesn’t actually matter to me if I don’t pull down the math texts to verify… Or does it matter?

I occasionally catch a popular technology talk online. Some of them are okay. Some of them pretty much annoy me, because I do know enough to spot the false rigour, unsupported assertions, or the selective presentation of facts to support a thesis. It makes me think of Dr Fox or the Sokal affair. I know the talks aren’t supposed to be deep research, and people like an appealing assertion with an engaging delivery, and happily accept the conclusions (much like me with my book). But in this case my knowledge is often sufficient to see well outside the walls.

And I realize that an intriguing, attractively presented error really is a problem.

Back to the Right Side of the Door

Which comes back ’round to the Mooreeffoc effect. Creating a compelling learning environment may help with creative thinking, but there are times you need to look beyond the illusion and see what the emperor’s tailors are really making….

In a well designed Mooreeffoc space, you’ll not quite be able to take anything for granted; the moment you do you’ll be put off balance by the results or effects. It may be that building that habit of looking for the unexpected in the environment will waken us enough that even if we are not looking outside the walls, we can at least ask good questions about what we’re seeing inside of them; and transfer that knowledge beyond the walls later on.

But if we’re really trying to foster self-directed learners, it defeats the purpose to keep them confined in a walled garden of someone else’s making. The learning environment may be bright and enchanting, and realistic, but it isn’t quite real, which makes it quite easy to lead someone, quite literally, down the garden path.

In the end we have to go back to the origins of Mooreeffoc – it’s not just possible for it to work when we look beyond the walls; it’s imperative that it does work.

Done right, a learning environment will push us to look outside the walls, once we’ve enjoyed our time in Mooreeffoc. You can’t always be sitting inside the coffee room – at some point you need to step outside and look at the world with fresh eyes and new ideas.

November 21, 2010

By jleffron

Comments

No Comments

Posted In

Change Management, Learning

Snake Oil, Skepticism, and the End of Discourse

There must be something going on in the hive-mind that is the internet.  I’ve run across a half dozen references to skepticism in the past half hour.  All of which reminded me of something I wrote a while back.

It’s not an easy thing to be a skeptic.  It’s quite common for someone to express skepticism about something new, and then find themselves in a hailstorm of irate responses along the lines of:  “Oh, they’re just hidebound old fogies who can’t accept innovations or new ideas.”   Maybe, though, there’s a reason they are skeptics.  Maybe they are all for innovation, but are simply asking the right questions because they are just a little better and faster at analysis than the average bear.

People like new ideas (or even old ideas that have been dressed up in bright new clothes).  There is always that hope that there is a magic solution, that the “next great and wonderful thing” will actually live up to its promise and transform the world, or at least the workplace.

So “New” sells.  It sells books and products, it generates prestige…  And those who are clever enough to say “yeah, this is great in concept, but how are you going to handle…?”  tend to be dismissed as too old fashioned or resistant to change simply because it is human nature to want that new solution to be perfect; we like the fantasy (or cling to the hope) that somewhere there is a ‘silver bullet’ solution.

This is a big problem, because it removes the opportunity to address those potential pitfalls at an early point of adoption, often preventing bigger issues downstream.  We’ve all seen the “next great thing” fail to live up to its promise in the workplace, in education, in technology. And some of those failed initiatives need not have failed, perhaps would not have failed, had both the proponents and the skeptics sorted through issues at the front end instead of writing each other off as unrealistically naive and cynical, respectively.

All of which raises the question: When did phrases like “Can you give me some data?”, “How does this really work?”, or “There are some issues that need to be addressed” start being heard as wholesale rejection of an idea?  When did human minds become so narrow (or egos so fragile to criticism) that anything but absolute acceptance is deemed condemnation?

It points to a larger problem – the end of discourse.

Fingers could be pointed a lot of ways in this.

Educational systems that purport that they want students to “learn how to learn” but don’t teach them logic or rhetoric or any of those other old fashioned topics that allow for examination and conversation around all angles of a situation?

The business world, where the model has shifted from building businesses that will last and thrive for years to come, to merely seeking to make the best possible numbers for this quarter.  In this situation people want a quick win; there is no time for, and no interest in, real long-term viable solutions.  This model is not only systematically starving and killing off the flock of geese that lay the golden eggs, it also effectively puts employees in a perpetually defensive posture where opposing views or mention of flaws are viewed as threats to one’s career.

Regardless of the origins, this is a problem that needs to be addressed, because it is bigger than “Is [insert innovation here] good or bad or neutral?”   If we can’t ask real questions, it’s going to be pretty tough to distinguish between “snake oil” and a good idea that needs refinement.  A lot of good ideas are going to get lost in the shuffle if there is not room to ask the hard questions that will take those ideas beyond the initial burst of enthusiasm to a point where they can reach their full potential.

November 19, 2010

By Michael Effron

Comments

No Comments

Posted In

Change Management, IBP/S&OP, Learning

The (Re)Learning Organization: Beyond the Training Department

With all due respect to Peter Senge, I think that in many cases we don’t need a “Learning Organization” as much as we need a “Re-Learning Organization”. Please don’t misunderstand this and think that I don’t believe we should create Learning Organizations; I can be a die-hard idealist, and I love the vision of

“…organizations where people continually expand their capacity to create the results they truly desire, where new and expansive patterns of thinking are nurtured, where collective aspiration is set free, and where people are continually learning to see the whole together.” – Peter Senge, The Fifth Discipline

But I think that this overlooks the time dimension of business.

Everything Changes. Everything Remains the Same

As I look back over the past 25 years, I realize how many things I’ve learned through a series of transformative business initiatives. Quality Improvement (what is now called Six Sigma), Problem Solving and Decision Making, S&OP/Integrated Business Planning (IBP). All of these were major corporate initiatives which provided extraordinary benefits.

And then we moved on. Some retired, some moved to other companies, some were downsized. And the corporation remained.

The belief seems to be that the learning remained with the company. But the reality is that the learning remained with the people, and the company only retained the learning as much as it retained the people, and perhaps even only when it retained the people in the same roles.

We see the symptoms of this all the time. “The process degraded.” “We used to be good at this.” “We stopped doing that years ago.” Is there any less need or opportunity for six sigma improvements, or integrated business planning? I don’t believe so, but those people moved on, and the organization lost the learnings and ability.

I think we forget that the things we learned were acquired over time through many projects and initiatives. I look back to how I learned Continuous Improvement. A corporate initiative was started, sponsored at the highest levels. It wasn’t for any noble reasons; it was because one of our customers told our CEO that, compared to all of his other suppliers, we had the worst quality. This put the wind in our sails, and we were off on an important journey. An outside professional came in and taught a group of us the principles and mechanics. We started up some pilot projects, and each of us was given the time needed to complete them. More importantly, we learned through our mis-steps and our successes. Our professional returned and gave us more education, and we became internal experts who could train others as a part of their job. We could have received this training on day 1, but without the prior experiential learning we would not have been successful. And through our own trial and error, we truly learned how to both do and teach. We launched more projects to expand the knowledge, experience, and benefits until three years later we had touched nearly every employee in the company.

And then I moved on into another role, and stopped teaching and coaching others in the process and techniques. The company stopped creating new internal experts, until finally the next generation of people had no one to teach them the language, the techniques, the mind-set, and the benefits. And the process faded away. Not because the processes and cultural improvements we put into place were no longer needed – we simply forgot to teach the new people what we had learned.

A Re-Learning Organization

I would posit that in this age of increased mobility and short tenures with companies, the biggest barrier to corporate success is the loss of what the company has learned. We need to create re-learning organizations. Of course it’s impossible to transfer all internal knowledge, but we need to identify those critical learnings that need to be sustained, transferred, and enhanced; starting with the learnings from business initiatives such as Six Sigma, IBP/S&OP, and Marketing Excellence. And who can identify these critical learnings and acquired knowledge? This clearly resides with the people doing the work, either the internal experts who lived through the initiatives, or the functional experts who have learned or developed the “tricks of the trade”.

To become a Re-Learning organization, our companies need to make a few key investments. Most importantly, they need to allow employees to spend time learning, from both the delivery and acquisition sides. At its best it would look like what Senge described: “…organizations where people continually expand their capacity to create the results they truly desire…” But at a minimum, they need to set the expectation and provide the time for the experts to share their knowledge, and for all employees to learn in both formal and informal situations. Additionally, companies need to make their leaders accountable for the learning of their people. This should not simply be passed on to the training department to manage for everyone; the experiential knowledge resides in the minds of the functional employees and cannot be captured in a training manual or PowerPoint presentation. The functional leaders need to continually provide the opportunities for experiential learning, both within their function and cross-functionally. While we cannot continually recreate the implementation experience for everyone, we can provide the projects or events through which experiential learning will occur.

Re-Learning Realized

You might think that this is just idealistic thinking, but I’ve seen companies succeed with this, and experienced it personally as well. In one business, the S&OP process was sustained and improved even as General Managers came and left. They had one individual who was the torchbearer for over ten years, and he coached each new GM and any other new employee in the “way that the business runs through S&OP”. In another business, the Six Sigma process was in use throughout every department even though the original implementation occurred many years earlier. Here the torch was passed to new members of the leadership team. These examples highlight the two elements that I see time and again in Re-Learning organizations: a deeply held corporate belief that the “things we learned and the way we do things” are integral to the business’ success; and a torchbearer who ensures that everyone is taught the key learnings when they join, and then applies those learnings in the course of their work.

Building a Re-Learning Organization isn’t difficult; it simply requires a clear view of what needs to be sustained, and the personal leadership and commitment to sustain it.

November 18, 2010

By jleffron

Comments

No Comments

Posted In

Change Management, Learning

Should it Scale? Non-scalability as a reality, not a problem to be solved

I was reading a post by Bob Marshall, nodding in agreement with much of what he wrote.  I’m not in the software development business, but I often see the same problems that he describes relating to good work: [those who] “know how but can’t anyway because of where they work, who they work for and because of all the monkey-wrenches being lobbed into their daily routines…” He was speaking of the software industry, but what he describes is not an uncommon issue in any field.

I’ve run into similar scenarios, and one common factor among them is the general perception that every solution, every process, every approach ought to “scale”.  Since, in most business circles, continuous growth is viewed as not just good but essential, the desire for universal (and infinite) scalability of processes and procedures is understandable from the standpoint of efficiency.  Scaling may well streamline administrative functions (legal, HR, finance), but it is important to recognize that while certain aspects of a business are readily scalable, others (e.g. Operations, R&D) perhaps are not.  This non-scalability may indicate not a problem to solve, but a natural attribute of how human beings and communities really work.

Companies see themselves as a single expansive entity (and therefore embrace the model that universal, one-size-fits-all procedures are beneficial to organizational effectiveness), when in fact they are often effectively a bunch of boutique organizations welded together in a common enterprise.  If you talk with people in different functional groups of an organization, you know this; each group sounds like it works for a completely different company than the others.  What are often called silos are really the front doorsteps of different small communities.  And how one group learns or produces will not translate directly to how another group does.  Whether the boutique (or community) model is most “efficient” on an algorithmic scale isn’t the point.  The point is that it is how human beings actually interact.  No matter how much you scale up an organization, there will always be points of functional disconnect between the groups in their specializations and the one-size-fits-all codification of the larger organization.  Humans will continue to interact in small connected groups and build their own, most effective approaches.
Universally scaled-up practices, while efficient, will not necessarily prove effective with respect to quality or productivity within the smaller, organically formed segments of an organization.  Maybe the key to bypassing the “monkey-wrenches” that stifle good work is to recognize that learning design or software design (or any other business activity) should not be presumed to be infinitely scalable.  It’s always going to be a balance between efficiency and effectiveness.  So keep the uniform approaches in the arenas where efficiency matters, but also determine where effectiveness is the greater goal, and shape the policies to match how the work really happens.

November 16, 2010

By jleffron

Comments

No Comments

Posted In

Learning, Rigour

Artificial Rigour – Searching for a Field Guide

Rigour is a popular term in learning and training environments.  It gets trotted out a lot in marketing materials as well.  But the problem is that a lot of what gets posited as “rigourous” actually is not.  In an elementary school textbook, a workplace learning module, or a keynote presentation, you’ll find things that look like rigour, but that doesn’t guarantee that they are.

Someone who really knows a topic will spot false rigour in an instant – much as adults may chuckle indulgently (or cringe) at adolescents who attempt to pose as being much older. So maybe the first question is ‘how do I spot an expert?’, because experts are the quickest, easiest path to spotting false rigour. A real expert is often easily identified by their ability to accurately reduce a complex concept into layman’s terms without losing the fundamental meaning.  Of course, it might take a real expert to recognize that it was done properly – which gets you into a worthless ‘infinite loop.’

That leaves us with a conundrum of the first order: it is very difficult to accurately call out artificial rigour without sufficient expertise. So, what’s a non-expert to do?

My first instinct was to look at the problem from the perspective of fields like math and science (simply due to my own background).

In classroom texts it is not uncommon to find a sort of artificial rigour that was created to meet a list of criteria, as opposed to lessons rooted in true fundamental understanding and applicability.  The focus is not on a meaningful “why”, a reason we want students to learn something; it is rooted in lists and box-checking, which are themselves rooted in standards that have as much basis in perception and political agendas as they do in actual learning.

Box-checking driven learning has a high probability of being guilty of false rigour.    So that’s one warning signal, easily found, but it’s only a starting point.

What else comes into play?

We may not be experts on a given topic, but we can take what we know about expertise and use it as a guide.

A while back I wrote:

“If I’m really, really good at, let’s say, math, then I may not have to stop and think about quadratic equations because I intuitively grasp them; but if asked, I can clearly explain (in simple terms) why they have the solutions they have. If I am merely good at arithmetic, I can show you how to solve the equations (just by plugging numbers into the formulas), which might look like expertise to a novice but is really just mechanics; in that case I know it works but don’t fully grasp why or how.  A lot of false rigour works the same way.”

Simon Bostock countered these thoughts with the insight that being able to break concepts down into their component parts may (will) not work for all domains:

“I’m not sure true experts can always unpick and unpick. I think it depends, rather, on the domain.

Maths and physics are inherently unpickable, and the reputation of Feynman as a teacher, therefore, shines. Science depends on the principles of proof and peer-review so being a teacher (ie explaining stuff and testing that it’s been understood) is essentially the same as science. [Warning: massive over-simplification!!!]

But things like medicine, art and computer programming just have to work. We don’t necessarily care how the surgeon genius or the does-the-work-of-a-hundred programmer work. And we certainly don’t trouble them to explain themselves. In many cases, they probably couldn’t because it’s doubtful they’re aware of how they do it themselves – my feeling is that they’re drawing from as-yet-unnamed disciplines, and you can’t unpick things you can’t name.”

And he’s absolutely right about this…

Different fields have differing degrees of inherent “unpickability”. I can see this in the case of, say, a violinist: they can ‘unpick’ the details of technique and tone production, but as far as (for lack of a better word) artistry goes, well, that’s a personal thing, not so readily broken down.  But then again, in that case, I would put the expectations of instructional rigour on the technical aspects, and not assign them to the area of personal expression or artistry.  But we still need to look at what constitutes rigour (or at least expertise) in topics that are not inherently dissectable.

The Role of Narratives

I was helping someone with a technical problem that they were grinding through rather mechanically, without any real understanding (I could recognize this, as I’ve been in the same situation).  I took a comparable problem and broke it down into logical components, but did so within the context of a narrative about the physical reality that the equations were describing.  The same person was later able to discuss another problem with me in terms of meaning, rather than mere mechanics.  They had crossed a threshold, perhaps not into expertise, but at least onto the path that leads there.

Expertise goes beyond merely breaking down a problem into component parts; it’s deeply tied into a narrative.  Real rigour has a narrative rooted in truth; artificial rigour’s narrative is not entirely so – it looks almost like the truth, but on closer examination it is either rooted in superficial function rather than understanding, or rooted in fallacy.

We see artificial rigour in this guise in a lot of modern math curricula, where elementary texts proclaim sub-sections to be “Algebra” when, in fact, the students do not have a sufficient intuitive grasp of numeric relations for the work to have any meaning.  It looks like 8-year-olds are ‘grokking’ algebraic concepts, but they do not truly do so, because their minds are so filled with painful, tedious mechanics that they haven’t the mental energy left to grasp the intuitive connections.

The real narrative is one that shows mastery (and rigour), describing not “what is done” but “what it means”.

For non-technical areas like Simon’s examples of music or surgery, there are two layers.  There is the mechanical aspect of the work, and then there is, for lack of a better word, “artistry”.  If I am a reasonably capable technical musician, I can follow along and imitate the styles and variations of talented musicians, but I don’t have the internal grasp to create my own riffs.  To an outsider, on the right day from the right angle, I might look like I know a bit, but really I’m just a reflection of those who do.  An expert would know that pretty much right off; for someone else it might take closer scrutiny over a bit of time to realize I can’t really improvise like a pro.  A non-expert might not be sufficiently interested to notice.

For ‘unpickables’ (to use Simon’s term), expertise and rigour reveal themselves not in imitation but in creation.  The expert surgeon does not exactly mimic his peers, nor does Itzhak Perlman imitate other violinists; they may learn and absorb what other experts do on a technical level, but from that understanding, they can create.  So the teacher of these subjects does not provide rigour through mere mechanics, but through fostering the learner’s innate understanding, challenging it, stretching it.

Where Do We Go From Here?

It seems virtually impossible to separate a discussion of rigour from a discussion of expertise.  But it is possible for a non-expert in the field to keep a weather eye out for warning signs.

Artificial rigour tends to lean on the smoke and mirrors of a quick grind through the mechanical motions; this is a common feature of learning based on box-checking agendas.

Box-checking as a concept provides a bit of a compass star to another indicator: the antithesis of box-checking is understanding, and understanding often reveals itself in meaningful narratives (as opposed to snake-oil-style narratives; substance rather than a sales pitch).  Real experts can create meaningful illustrations, applications, and narratives; false rigour can only ape what it has heard or seen.

Another mark of artificial rigour is that it tends to make one “feel” good (accomplished, affirmed…).  It is very appealing. Real rigour requires hard work. It’s a bit like climbing a mountain: it may be pleasing at a deep gut level, but it doesn’t come easily or quickly.

I would love to have found a simple checklist (you know, like the box-checking discussed above) to help a novice identify real rigour when they see it.  But then again, maybe that’s the point: if you are a novice, it’s time to start asking around and finding experts.

The best I can offer is an invitation to continue the discussion.  I still have a lot to learn.

October 13, 2010

By jleffron

Comments

No Comments

Posted In

Learning

“Houston, we have a challenge…” (problems, failure and innovation)

When did “Problem” become a dirty word?

In a business conversation, anyone who utters the “P” word is likely to be shut down with the statement that “there are no ‘problems’, only ‘challenges’”.

For those from a technical background (e.g. NASA engineers), problems are a good thing: saying there is a problem immediately implies that there is also a solution.  There may be ‘challenges’ faced on the way to reaching those solutions, but problems are something we can work on, something we can do something about.  We can solve a problem and possibly prevent a failure.

But then again, “failure” is another word you can no longer use in business. Failures are now “learning opportunities”.

Now, it is true that if the NASA team had not solved the problems (sorry, ‘challenges’) on Apollo 13, they would have had quite a learning opportunity.  As would the crew; but the crew’s learning opportunity would have been very ‘short-lived’.

One problem with the constant relabeling of terms is that you cannot change the nature of the thing the terms stand for, and eventually the emotional baggage of the old term will transfer to the new. (“If it quacks like a duck and looks like a duck… it’s still a duck.”)  The flight crews of Challenger and Columbia would have suffered the same fate regardless of whether you called those situations ‘catastrophic failures’ or ‘learning opportunities’.

Another problem with relabeling ties into the cultural source of relabeling as a concept.  People whose work has real, direct results have less trouble using strong words.  By strong words I don’t mean the kind that would have gotten your mouth washed out with soap by mom; I mean the kind that will get you a dressing-down by your supervisor (and possibly lead to professional ‘learning opportunities’ for you).

If your work has a direct, observable function (be that as a farmer, or as a NASA engineer) you are fine with calling a failure a ‘failure’.  The fact that you will learn from your failures is a given; it’s part of the job, so obvious that you need not mention it.  You call problems problems, and then you go solve them.   And you also know there are some problems and failures that are out of your control: there are unpredictable elements to life and work and you handle them as they come.

In much of the corporate world, though, people’s work is so far removed from the ultimate results that it is possible (and perhaps even savvy, in a Machiavellian way) to avoid calling things what they are. But relabeling truths doesn’t alter the truths; they are still the elephants in the room, elephants that no one dares mention in an environment of fear and mistrust.  It seems the more euphemisms an organization uses, the deeper the culture of fear and distrust, and the more paralyzed people are from actually taking action.  In a business where you cannot say ‘problem’ or ‘failure’, it is very clear that you are not allowed to have one, under any name.  It is also clear that there is no real desire to innovate or learn; the preferred approach is to sweep failures under the rug.  It is more important to protect oneself from blame than to solve the problems.

This situation presents great losses in opportunity: when individuals and organizations face problems head-on and learn from failures, that is where the real innovation happens.  Don’t believe me?  Ask the crew of Apollo 13 and the folks at NASA how much they learned, and how many solutions they innovated to bring that crew home.  They did so because they faced up to the catastrophic failures on the craft, addressed the problems, and then set to work.  Heads were not going to roll for the failure; the only unacceptable action was not to try.

Innovation is the buzzword of the day in business; if you want innovation to happen, call problems what they are.  And then go solve them.
