General Resources


Meeting Software

If you are starting with little knowledge of qualitative computing, go to the resource pages for Handling Qualitative Data, where detailed answers are provided on the following topics:

Should you use qualitative software?

What's the state of the art?

Where to go for unbiased (and some biased) information?

Hunting down and talking to the Developer

You and software: managing this relationship

There is also a quick online guide, Stepping into Software. Each of its ten steps discusses 'what to ask about your project' and 'what needs to be done now':

Step 1: Start your project

Step 2: Getting your data 'in'

Step 3: Storing information and characteristics of informants

Step 4: Edit and Link - to store what you see

Step 5: Start coding and use coding

Step 6: Make and manage the categories you need

Step 7: Say it in a diagram

Step 8: Ask questions about your data

Step 9: Showing patterns

Step 10: Out of software and into reports

Just as research purposes and questions fit with data types and analysis strategies, so do software tools fit, for better or worse, with all these aspects of qualitative research. Start there, setting out what you are asking, what data you expect to be handling, and by what methods of analysis, and then ask which of the tools available in software would best assist you.

In Readme First, four detailed tables in Chapter 4 summarize what you can expect of software and the differences between products. As these show, there is substantial common ground for basic functions, but there are also sharp differences in the ways packages support projects and handle data, ideas, coding, and analysis. Details of particular products and links to the developers' websites are given in the regularly updated comparisons at the CAQDAS Networking Project website: http://www.surrey.ac.uk/sociology/research/researchcentres/caqdas/index.htm.

The Learner's Experience

    • Getting Started
      The challenge will be greater if you are not confident with computers. Commonly, researchers working qualitatively have used computers only for writing or limited database work. Can you manage your operating system and do good housekeeping of your files and backups? Are you competent in all the software you will be using, including your word-processing software? Can you format and edit well? This will matter when you are preparing data records for handling in qualitative software. If you are unsure of your skills in managing computer files, build them up now.

      Next, do you have the luxury of being able to choose your qualitative software, or are you stuck with an institutional license? If you have a choice, go to the tables in Chapter 4 to consider issues that will matter for your project. Then refer to the CAQDAS site, http://www.surrey.ac.uk/sociology/research/researchcentres/caqdas/index.htm, and the references found there, for up-to-date comparisons to inform your choice.

      Right, you have your chosen package (or the one chosen for you) and must confront the challenge of learning it. You need basic competence in it from the research design stage so that the project can benefit from software from the outset.

      In Chapter 12 of Readme First, the concluding advice to get you started in qualitative work comes under these headings:  start small, start safe and start soon, start with a research design and start skilled. These imperatives are very applicable to learning software!

      Start small! Don't allow the data to build up before you learn software. The researcher who comes to software with bulk data already collected will find it much harder to become skilled in the software, and will probably find it impossible to benefit from tools that require a different format or another approach.

      Start safe!  Make sure you have basic skills in the qualitative software before you commit your project to it. Work carefully with a very limited range of tools and a little data, ensuring you are comfortable with the ways you can achieve your methodological goals in this context. Check you can back up and retrieve without damaging the data before you include more.

      Start soon! There is no advantage in waiting to learn software. This is a great use of the gaps in schedules that happen whilst you wait for permissions to work in a site, or for ethical clearance for your project. If you have even a rough idea of the design of your project and the nature of the data records, you can learn software tools using other data, and keep notes on what will work when you start on your own project.

      Start with a research design! Software tools will make much more sense if you know what you are trying to do. The hills of research are alive with the wails of researchers who found a tool that would have opened up their analysis – after they had input all their data in such a way that they couldn't use it. Good software will include self-teaching materials, but the uses of its particular features for your research will not be self-evident.

      Start skilled! Please don't assume you can learn your software as you go, or worse, that you can learn by clicking around.  You may not need workshop-style teaching, but don't assume self-teaching is best.  Often, like most crafts, qualitative software use is best learned in apprenticeship. Can you hang around researchers using the software and discuss, even critique, what they are getting from it?  Seek colleagues or helpers who have the software skills and experience you want and talk to them about your project and what you think you want from software. Find what software support is available in your institution and use it. Go to the software maker's website to find workshops, consultants, or virtual courses. Seek out Internet discussion lists devoted to the software, so you can pick up tips from other researchers and avoid their mistakes. Learning with others is usually far more productive and often faster and much more fun than learning alone.

    • What the trainers saw
      The observations in this section are gathered roughly under areas of concern that emerged from the trainers' responses to my general questions. Some may surprise you.

      Trainers' comments are in italics, and I have left them to speak for themselves. There are strong themes here of challenges for novices, despite the now decades of experience of software. If you experience a lack of fit between methods teaching and software training, you are not alone!  Each of the trainers expressed concerns about what one described as "the often senior qualitative researchers and supervisors who adamantly refuse to move out of their own methodological comfort zone to support their more technologically literate students".   As one trainer put it, "The very worst character is the lazy masters course [teacher] who knows a little about software, and tells his MA students to use a particular software for qda (because it will be 'better') but then provides no methodological support, practical training or adequate time for the use of software to be anything but a worry to the student. That makes me so mad. Those students are like desperate refugees from badly run courses."

      Comments are gathered under the following headings:

      • A personal note

        This section has a personal context. I was one of the first cohort of qualitative researchers turned software developers and had three decades' experience of writing about the methodological impact of software, preparing materials for users, training them to use software in their research and training trainers to do the training. In all that time, I felt that new researchers had little access to information about the impact of starting out in software, and how hugely it can affect the project. Amazingly, the widespread use of qualitative software doesn't seem to have removed these problems. Researchers I work with now seem just as uninformed and unready for software as those I helped in the 90s, even, perhaps, more so because there's no novelty or excitement to the challenge of using software tools – they know they have to use it, and just trudge into it.

        I've always felt that trainers were the best and most neglected source for the needed insight, and that those approaching software would be helped by hearing of lessons learned from training. We trainers lived through and agonised over the muddles and mayhem and triumphs and glorious achievements of the researchers we tried to help. The demand for training was and is huge, and the number of trainers worldwide astonishing, so the accumulated experience is massive. Of course we felt we could often see, much more clearly than the users themselves, where software tripped them up or empowered them – just because so many projects had been helped by us (or gone past us). We had that awful déjà vu, that sinking knowledge, when we met another project doomed to end up coding forever, and the thrill when we found a newcomer sure-footedly exploring what the software could do for their purposes, able to learn from our teaching and apply it to their project. But the trainers' voices were not heard beyond a very few specialist conferences.

        In preparing this website, it seemed timely to collect some trainers' accounts of what they know will work or impede as novices start out in software. What follows draws on the views of just three very experienced trainers. This is not, of course, a representative sample of anything. To do a proper qualitative study of qualitative software trainers would be a most intriguing project, but not one I could tackle here (and perhaps, given my involvement, not a project for me). So I selected just three from the extraordinary army of skilled software trainers around the world. These three were asked because their experience reaches across many years and many software products, and because they have worked collaboratively, so they have some comparative insight into the experience. Two – Ann Lewins and Christina Silver – have for years been the backbone of the UK CAQDAS project, the definitive centre for information about qualitative computing: http://www.surrey.ac.uk/sociology/research/researchcentres/caqdas/index.htm. Ann was its founding expert and Christina is now its manager. They co-authored the text Using Software in Qualitative Research: A Step-by-Step Guide (London: Sage, 2007). The third voice is American. Coming off ten years of using software to analyze qualitative audiovisual data for a health-based research project, Jen Patashnick now works with Ann and Christina providing qualitative data analysis services. You can read about them and their publications at http://www.qdaservices.com/.

        What follows are their collected comments. I've not attempted a qualitative analysis of these! They are insights offered to help you avoid traps and build a good basis for learning software well and using it skilfully. My hearty thanks to the three trainers who contributed.

      • Don't look to software for method

        These trainers agree – as do I – that learners who expect to get qualitative method from learning software, or who approach software without method, are those most likely to meet problems.

        In an introductory training session where you tend to have participants with a wide range of backgrounds and expertise it becomes clear that those without the methodological grounding can struggle. That's not to say that qual methods knowledge 'needs' to come first. I think my opinion on this has changed somewhat over the years. We are now dealing with a generation of students and researchers who are generally much more computer savvy and are more likely to have an expectation of using computers to facilitate their analysis. You definitely see this difference when you have some who are more established qual researchers who are coming to software later in their careers. You can teach about qual methods at the same time as teaching a software, but that needs to be done with care. I am particularly reluctant to do so with undergraduates – just because unless you have sufficient time – which I am unfortunately rarely given with undergrads – there can be a tendency to see the software as the method. I have strong feelings about this and this is one of my concerns with respect to the development of software over time. I think the commercial context within which these packages now operate can be seen to have compounded this.

        When participants come to training with some idea of what they want/need out of the software – either generically or specifically – they tend to get most out of introductory training. This is often related to both their methodological knowledge and their level of computer confidence. Those that are methodologically well-versed and therefore know what they need to be able to do analytically look out for those aspects from the outset and are generally able to translate broad or abstract discussion of tools to their particular analytic needs. Those that are not afraid of the software – in that [they] have experimented with software ahead of training – also seem to 'get it' more quickly, often coming to intro training with questions about how a tool works, or what is the 'best' way to use it.

        At the other extreme is – someone who doesn't understand what they're doing when they do any software process. Coding is the big one. If you don't understand what coding (or anything else, for that matter) is DOING, you ought not to be doing it. I had a client once who called me for the first time after she'd coded all of her data. She didn't know what to do next. (This is pretty typical, by the way. "Okay. I've coded; now what?" But coding doesn't HAVE to be the first step all the time...) However, when I sat down with her to look at her project, she had coded each sentence of her data to a different node – titled by the sentence. (Don't you wish I were kidding?) She had no clue what coding was, nor what the point of coding was, and her coding, of course, was pointless.

        The di Gregorio and Davidson book has done wonders in my view for providing individuals with the opportunity to prepare for the nuts and bolts of project work, with software in mind.  I think there's a lot of mend and make do with software, launching into it without really knowing what it's going to do – avoiding the difficult decisions about using software for organisational aspects of data management  because they are just too difficult or boring to bother with.  As we know that's fine for a while and we can delay some choices and actions.   The simpler the software in those terms, the easier it is to retrospectively fix those omissions or to fix them from different starting positions.  The more complicated the software in those respects the more difficult it is to perceive (and explain) what needs to be done.  Being able to explain how to use a software without going into 5 million conditional provisos about different starting points must be one of the yardsticks of good software design.

      • You are the researcher. Software won't do the analysis.

        Just as software won't teach you qualitative methods, it won't do the thinking and interpreting that these methods require.  Comments include concern that this expectation comes especially from researchers with quantitative backgrounds.

        I sometimes get to the end of the workshop and someone will come along and say to me... so the software  doesn't do it for you, does it? ... It's always there by implication but sometimes I do forget to say slowly, YOU DO ALL THE THINKING...    I picture them sitting there, throughout a 2 day workshop maybe, just waiting patiently for it all to click into place, waiting for that moment when I say – now just hit that red button and your analysis will come out of that tube over there.

        I think the biggest misconception is that the software will "do the analysis" for you.  I find myself repeating, "The software can't think for you. You have to do the thinking."  I also think people sometimes think that software is a shortcut for their analysis.  Now, it may well be that doing the analysis in a software package will take less time than doing it manually, but it's not going to do it itself and/or take no time and no effort on the part of the researcher.  It's still the researcher doing the analysis; it's just facilitated and organized in the software.  In rare cases, this sometimes makes the researcher think again about using the software at all because they feel it's just one more thing they have to learn when they can probably do it "easier" by hand without a learning curve.

        I also have issues with people who have a general misunderstanding of what software will do for them.  These are the folks who want to just "put the data through" the program and "see what comes out".  They are lacking a general understanding of what qualitative research is.  There is no software in the world that will provide that answer to them.

        Those who have a more quantitative background can sometimes expect things from qualitative software which are not possible – e.g. occasionally they assume that the software will 'do' the analysis for them. In extreme cases you can see their disappointment early on in this respect. This can also be the case for some students who are looking for a short-cut to analysis. Illustrating tools such as word frequency or coding resulting from text searches or auto-coding can often be overly seductive for these participants – you can sometimes see the 'excitement' these tools invoke! The same is true of those packages which allow the direct coding of audio or video tools – 'wow, I don't need to transcribe...!' In most cases, as a trainer, you can warn against the perils of these short-cuts – those being methodologically minded and/or keen to ensure their analysis is rigorous can see the limitations of such tools for certain types of data – but sometimes you can see that some will use these tools to the exclusion of careful reading of their data.

        I do feel less need generally to say 'the software doesn't do the analysis', but with certain types of participants it becomes evident during training that this is not always obvious to them. Conversely, with more sophisticated visual representations and more quantitative summaries of coding – as well as more ability to conduct mixed-methods analysis – you see that this is quite often an unexpected possibility.

        ...I get the feeling that some students are increasingly looking for ways of speeding up their analysis and the more recent trend amongst software to handle multi-media directly – although providing an important – and welcome - means by which to analyse non-verbal communication – may well be being used as a way to avoid the time-consuming process of transcription. I had a conversation with a PhD student recently in which I tried to emphasise that where the content of the interview discussion is the main aspect of interest it's difficult to side-step the need for transcription of some sort. The idea that transcription is in itself an analytic act, and a necessary one in terms of familiarising with the data, and that it needn't comprise a fully Jeffersonian format, was lost. Anecdotally I think more PhD students are farming out their transcription than used to be the case and this trend worries me somewhat. In funded projects this is often a necessity, although lots of work goes into checking transcripts and re-formatting them for the special requirements of particular software packages, but I tend to feel that students need to go through this process themselves – maybe that's just because I had to do it myself!!

        The single most annoying participant is the one who thinks that reading the data is for dummies. The only tools worth bothering with are the ones that enable you to count stuff and thereby (of course) remove any obligation to read the data. This is also usually the person that needs to communicate his really snazzy discoveries to all around him in the workshop...to free other participants from the overly weighty responsibilities of qda.

      • Have you tried doing it without software?

        Is it useful to have tried to handle qualitative data manually?  Compare with quantitative work:  nobody suggests to the survey researcher that they should use manual methods!  But qualitative researchers who have not tried manual methods may not understand the challenges of doing justice to unstructured data. My own experience suggests that those who know the frustrations of handling such data manually are more likely to become skilled and innovative users of the undoubtedly limited software methods.

        It's a great help – they don't come unless they are open to new possibilities but they can instantly see how their own approaches can be supported and enhanced. 

        Usually a help, as they can see how the software makes manual processes more thorough. Only a hindrance when people are very wedded to a particular approach to coding – or want to analyse data without heavy reliance on coding tools. Those doing narrative-type analysis often struggle with software that they perceive to be over-reductionist. Some tools are better than others in this respect, but it's not infrequent that people decide not to use software because they can't see how to stay 'true' to their analytic approach.

        Interestingly, I think there's been a shift in this over the last 10+ years. At the beginning (yes, my youthful beginning in the early 2000s), I think most people attempting to use qualitative software DID have some qualitative training. That was great because I could tell all sorts of highlighters-and-cards jokes. More recently, I find that people who are using software are doing so because their advisors tell them to (even though their advisors have no experience with software – although this works in the opposite direction too; some advisors tell their students NOT to use software, or that the student doesn't need to because the student doesn't have enough data to warrant it) or because everything is computer-based so it's just a default.

        I think it's helpful if they have worked manually before because they can really appreciate the increased flexibility and power that a software package can offer.  I get a lot of hands hitting foreheads, or people who put their head down on the desk after I show some function, and groan, "Oh that would have saved me a MILLION YEARS' WORK during my dissertation!"  It's only a hindrance in someone who is also computer-phobic and therefore would rather just do things manually no matter what.

      • There's a choice of programs – did anyone tell you?

        Did you explore the software available? Or did you see no choice? These trainers agreed that increasingly researchers are unquestioningly using the software provided by their institution – and most have not looked at options. How commonly do novices assume they will use a particular software package because it's provided by their institution or used by colleagues?

        All the time.  Although I have also been involved with several clients who have changed jobs and have brought whatever software they were using in the first job along with them to the second job and convinced colleagues to use that instead of what was pre-existing (mostly because the pre-existing software had no expert users or advocates on campus).

        Nowadays we don't see a great deal of 'choosing' any more. That's as much to do with the situations we find ourselves in, in terms of teaching of software, as anything else. Once upon a time we put ourselves out there to inform and compare software – we would get a sense of what people were looking for and what appealed to them. That varied according to their computer confidence levels and their project needs. Now, when we teach software, people use the software available in the institution unquestioningly because it is so much more common for there to be a software in place – that's it – there's almost a disappointing sense of uncritical acceptance. That's probably how it ought to be, given that we are usually there for a short moment of a research project's life ...the data are what matter and the software just a tool. Nevertheless we do try to encourage them to make a critical choice within the software about the bits which are going to be useful for them and their approach.

        One thing of interest is that we have also seen recently  significant research organisations switching from the use of one established software to another  – this is interesting and it obviously happens because someone comes into the organisation with strong enough views about the differences between  software and is prepared to take the bull by the horns and make the change.  It won't happen at a university level, but for research groups or smaller institutions it does happen. 

        Related to this, our feeling is that inevitably some software developers have reduced the accessibility of software. Commercial imperatives to provide more and more tools (e.g. mixed methods support) drive the expansion process beyond the easily absorbable. Not that they were ever exactly easy – but there's a critical tipping point. What happened gradually after the first not very user-friendly software packages is that developers got better at simplifying – at subsuming steps – and making the difficult stuff palatable. The very spread of functionality is reversing that process.

        If they are not already wedded to a software package, I usually suggest they go compare packages and read up on them to figure out what's going to work best for their data and the way their head works.  I very much believe that there are several excellent qualitative software packages out there, but that a large part of what will be "best" is determined by what makes sense for your own head.

        Most of them haven't [chosen], or, if they know there are options out there, one has been chosen for them because it's the one their university or workplace already owns, or because it's the one their mentor/advisor told them to use.

        I think people use the internet to attempt to explore their options, whether it's software or cars or toothbrushes.  Unfortunately, I think people also rely too heavily on the software manufacturer websites, all of which will tell you that their particular software is the best thing out there, of course.

      • Has it helped to have qualitative software normalized?

        I'm not convinced that software use is widely normalised. Yes there are many more users now, but it continues to surprise me how many researchers I come across who either are unaware of software or hold strong misconceptions about its functionality and role. This is more often than not based on forays with software many years ago, when things were indeed more restrictive.

        I think they are more resigned and less excited. LOL. They're not seeing it as a cool new tool; they're seeing it as one more thing they have to do/learn. I hope by the time we're done working together they are enjoying their time in the software, because I most certainly do! But I get a lot of "my advisor said I have to use this so here I am".

      • Where to go next?

        I asked the trainers if they suggested novices use tutorials on the developer's website before starting with their own data. A mixed response.

        I don't usually suggest these, only because I feel that I can do a better job teaching them to use the software in a customized and hands-on fashion, much more responsively than a tutorial can.  I think the tutorials can be valuable as reminders or windows into other ways to perform certain tasks, but I don't find they necessarily provide enough context for folks to figure out why they might need to DO that thing that they're being shown.

        Yes, always. We emphasise that our opinions are provided in the context of broad comparisons and that ultimately the choice is up to the individual.

        Yes I do... sometimes I come across users at a workshop who have taught themselves quite well from such resources. I admire them greatly – because it can take a lot of perseverance. Nearly always, though, it will only be to a level which stops short of seeing the bigger picture, if there is one that needs to be seen. By that I mean how things can fit together – the backwards and forwards of it. Teaching in a face-to-face workshop inevitably has an element of linearity about it – but there are more opportunities for making those links and somehow providing the 'escape routes' – maybe the more creative uses that software can be put to.

        Download stats indicate that the software reviews on the CAQDAS website are well used and we do frequently get emails thanking us for these resources. It's difficult, having written them, to comment on their utility in relation to other resources, but the software planning seminars we run continue to be very well attended. It also surprises me how many individual requests we get to help with deciding which package to use.

      • What predicts success?

        Although I am an advocate of seeing software as a project management tool and therefore think that the earlier people engage with a tool the better, in reality, those that have data collected and perceive themselves to be 'ready' to analyse them, tend to be the ones that start using the software immediately after training. We get the same people coming back on intro courses frequently because they didn't go away first time and use it straight away. Those that are convinced at the end of training of the virtues of using software are always going to be more motivated to experiment on their own and will get more out of it. That said, many come to intermediate/advanced training because they think they're not 'using it to its full potential'. There is a sense that you need to know everything to use it effectively – why is this? No-one knows everything about how MS Word or any other package works, but this doesn't stop them from using it effectively to write an essay, a letter, or a journal article. This speaks to a sense of the 'importance' of the analysis – and a fear of doing it wrong or using the tool badly. These sorts of things are common discussions at intermediate training. It never seems to go away. In many cases it's just a need to be assured they are doing ok, and most often they are. As a software trainer you are only ever a facilitator in this respect. You show them the sorts of things that are possible and you encourage them to try it out and ultimately, I feel I have done my job if I can encourage independent and critical use of software tools. 

        It seems to me that the folks who come away the most excited with the possibilities are the ones who are going to continue to use it successfully.  The ones who find that they really "get" the software, the ones who find it makes intuitive sense to them, those are the ones who are relaxed about playing with it, and truly enjoy their time using it – they will continue to use it.  The ones who are just using it because they have to get this one thing (project, dissertation, whatever) done and their boss/advisor/team is using it so they're forced into it, are not going to continue using software after they complete their short term goal.
