Qualitative content analysis according to Gläser and Laudel – simply summarized in 3 steps
In their book “Expert Interviews and Qualitative Content Analysis”, Jochen Gläser and Grit Laudel present a very distinctive approach to qualitative content analysis. What makes it special is both the research interest behind it and the way the material is prepared, the so-called extraction. Here is my attempt to explain the basics of this procedure as compactly and simply as possible.
Special research interest
Gläser and Laudel are particularly interested in sequences of events, such as those found in biographical narratives – progressions like:
“First this happened to me, then that happened because of it and then the following developed from that.”
Such causal chains are difficult to map in a hierarchical code system; they would quickly make the code system huge and confusing. To preserve these chains, keywords for the dimensions of interest are noted for each text passage during the so-called extraction, and all further work is then done with this extracted material.
Not only for expert interviews
I can’t help but grumble a little about the book title here: the combination “Expert Interviews and Qualitative Content Analysis” can unfortunately be misunderstood in two ways. a) When conducting expert interviews, you don’t necessarily have to work according to Gläser and Laudel; the procedure is very elaborate and (often too) complex, especially for newcomers to qualitative content analysis. I find it much more regrettable, however, that b) the title can also be read to mean that the procedure is only applicable to expert interviews. That is not how I see it. So please don’t read the “and” in the title as in “Romeo and Juliet”, but rather as in “dog and cat”: these are two topics that stand on their own – type of material and method. A more fitting, albeit more cryptic, title would probably be “Qualitative Content Analysis Using Extraction”, because extraction is the particular work step that Gläser and Laudel have developed and that makes the procedure so distinctive.
Preparation of the text through extraction
Put briefly: systematically note the relevant information from each suitable passage and then continue working with that material. Let me summarize this in 3 steps:
- Create categories and dimensions
- Identify passages in the text that help you answer the question and note down the relevant dimensions.
- Prepare the extracted text as a table.
1. Form categories and dimensions
Content analysis by extraction works fundamentally with deductive or a priori categories. This means that you start by thinking about the likely relevant categories and their dimensions – ideally on the basis of a theoretical concept. Categories are something like overarching themes, for example “social relationships”, “technical possibilities”, etc. Each category always comes with a short definition of what is meant by it and how you recognize the topic in the text. Dimensions are then the characteristics that interest you across all categories, for example: causes, effects, actors involved, time, different levels of content. A helpful image: for each topic or category, I create a table with the dimensions as columns.
Category: Social relationship

| Causes | Effects | Actors | Time | Relationship between the actors | Subject matter | Text passage |
|---|---|---|---|---|---|---|
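If it helps to think of this in terms of data structures: here is a minimal Python sketch of the idea of one table per category with the dimensions as columns. The category names, definitions and dimension labels are only illustrative.

```python
# One table per category, with the dimensions as columns.
DIMENSIONS = [
    "Causes", "Effects", "Actors", "Time",
    "Relationship between the actors", "Subject matter", "Text passage",
]

# Short working definitions of what belongs in each category (examples only).
CATEGORIES = {
    "Social relationship": "passages about family, friends, contacts ...",
    "Technical problems": "passages about difficulties with devices or software ...",
}

# Each category starts with an empty list of rows (dicts keyed by dimension).
extraction_tables = {category: [] for category in CATEGORIES}
```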
2. Identify text passages and note dimensions
For each relevant passage, write down the information contained in the passage in the appropriate table.
E.g. “Oh, and then last year, because of my mother, I installed WhatsApp – with a lot of problems – to exchange vacation photos with her.” becomes:
Category: Social relationship

| Causes | Effects | Actors | Time | Relationship between the actors | Subject matter | Text passage |
|---|---|---|---|---|---|---|
| Installation of messenger | Exchange of photos | Mother | 2020 | Family | Joint vacation | I1, para. 3 |
Some fields can remain blank, and some text passages can be sorted into several categories. The same passage could, for example, also be classified in the “Technical problems” category, but with “Installing WhatsApp” as the subject matter.
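Continuing the sketch above, recording an extraction then simply means adding one row to the matching category table. The values follow the WhatsApp example; the passage reference “I1, para. 3” is just an assumed label.

```python
# Record the extraction as one row in the matching category table
# (continuing the extraction_tables sketch from step 1).
row = {
    "Causes": "Installation of messenger",
    "Effects": "Exchange of photos",
    "Actors": "Mother",
    "Time": "2020",
    "Relationship between the actors": "Family",
    "Subject matter": "Joint vacation",
    "Text passage": "I1, para. 3",
}
extraction_tables["Social relationship"].append(row)

# The same passage could additionally be extracted into "Technical problems",
# with "Installing WhatsApp" as the subject matter.
```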
In this phase, inductive “loops” are also possible – as is usual with qualitative methods: If you realize while working with the material that important aspects were not considered in step 1, you go back one step and add or correct them.
3. Use tables for case-based evaluation
The result is a set of tables in which the aspects within each row are causally linked. Each row thus provides me with a line of argument from the narrative of my interview partners. And since tables can be sorted by any column, I can later get an overview of all causal chains, e.g. for a particular interview or a specific topic.
The evaluation of the text passages prepared in this way is then carried out by compiling various tables. Gläser and Laudel, for example, create case descriptions for individual people and then, building on these, summarize the biographies of people from the same (sports) field. You can also sort the tables according to various criteria: for example, if you have included “Time” as a dimension, you can sort the table chronologically and thus create a summary of the sequence of events.
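Sticking with the sketch from above, sorting and filtering such tables is then straightforward. Purely as an illustration: a chronological sort and a per-interview selection, assuming the interview label “I1” in the “Text passage” column.

```python
# Sort one category table chronologically and pull out the rows
# that belong to a single interview (here assumed to be labelled "I1").
table = extraction_tables["Social relationship"]

chronological = sorted(table, key=lambda row: row["Time"])
from_interview_1 = [row for row in table if row["Text passage"].startswith("I1")]
```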
Gläser and Laudel provide some examples of concrete evaluation options, but at the same time remain very open and speak of a “multitude of tables”. This leaves the application wide open, but it is also quite demanding, especially for beginners in qualitative content analysis.
What can this look like in QDA software?
This is the real reason I am writing this article: I am often asked how content analysis by extraction can be implemented in f4analyse.
f4analyse can export results as a table and is therefore fundamentally compatible with qualitative content analysis using extraction. When working in f4analyse, the extraction remains very closely linked to the original text passage, which makes it easy to refer back to the source. However, creating a clear and suitable table is indeed a challenge; it requires functions from both f4analyse and Word. So, here we go, now it gets creative and exciting:
Collection and coding in f4analyse
I put a semicolon in front of each text name. Sounds strange, looks strange, but we will need this semicolon later to create nice tables in Word. Then I create the categories as codes and note the definition, indicators and dimensions in the respective comment field. Then I read the text. If I come across a suitable passage, I mark it and write down the descriptions of the individual dimensions right there. It is important to always keep the same order, to separate the individual dimensions with semicolons and to put a semicolon at the end as well. If a dimension is not mentioned, I enter a placeholder, e.g. “-”. I then code this complete extraction to the appropriate category. If a text passage fits two categories, I write two such lines below each other and code them separately to the corresponding categories.
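For the WhatsApp example from above, such an extraction line could look something like this (the order and wording of the dimensions are of course up to you):

```
Installation of messenger; Exchange of photos; Mother; 2020; Family; Joint vacation;
```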
Preparation as a table in Word
After all text passages have been coded, I export to Word via the export button and “Codes and coding”. All extractions are now listed in a Word document, sorted by topic. For a nice table, I select all passages of a code, click on “Insert – Table” in Word and choose “Convert Text to Table”. Anyone who has ever wondered what the semicolons are all about will now experience an “aha” moment: in the dialog that pops up, you can select the semicolon as the separator.
And everything turns into a nice table. If necessary, you can use “Find and Replace” to remove the quotation marks. Admittedly, the table created by Word looks a bit squashed at first; you should switch to landscape format and tweak the column widths a little. You can then also sort the content in Word via the “Layout” tab.
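If you would rather avoid Word altogether, such a semicolon-separated export can in principle also be read by any spreadsheet or CSV tool. Purely as an illustration, here is a minimal Python sketch under the assumption that the exported extraction lines have been saved to a plain text file – the file name and column order are made up for the example.

```python
import csv

# Read semicolon-separated extraction lines (one extraction per line,
# fixed dimension order, "-" as placeholder for missing dimensions)
# into a list of rows. File name and column names are assumptions.
COLUMNS = ["Text", "Causes", "Effects", "Actors", "Time",
           "Relationship between the actors", "Subject matter"]

with open("extractions_social_relationship.txt", newline="", encoding="utf-8") as f:
    rows = [
        dict(zip(COLUMNS, (field.strip() for field in fields)))
        for fields in csv.reader(f, delimiter=";")
    ]

# 'rows' can now be sorted or filtered like any list of dicts,
# e.g. chronologically by the "Time" column:
rows.sort(key=lambda row: row["Time"])
```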
So much for the extraction, which yields a list of extractions in tabular form very close to what the book describes.
For all those who are not afraid of working with Word macros: Grit Laudel also offers a macro on her website for working directly in Word, which Gläser and Laudel presumably also used for their own publication.
Extraction “light”?
I personally avoid working with tables in Word whenever possible. Therefore, I pragmatically suggest carrying out the evaluation in f4analyse. You proceed as above, with the difference that you do not export the results as a table but view them directly in f4analyse. In the “Selection” tab, I can call up specific coded text passages and thus, for example, view all extractions from one interview on a specific topic. In the comment field below, I can write a case summary on this basis – first for one topic, then for the next.
Using the “Compare A/B” button, I can also call up extractions from two interviews side by side and thus compare them. I can record my findings in the comment field below. Or I combine different texts into groups and place them next to each other. Or I place two topics next to each other and compare the extractions. There are many ways to view the material in a targeted manner. Only sorting by specific dimensions is not possible in f4analyse.
Conclusion
Working with extraction according to Gläser and Laudel is definitely something for fans of tables. The preparation of the extraction in tabular form is unfortunately not trivial and requires a considerable amount of effort and appropriate formatting skills in Word. If you still want to do it as described, you can prepare the material in f4analyse accordingly and then export it as a table or summarize it in f4analyse, depending on your interests.