
Summary Business Research Methods

Chapter 1
o Business research: An organized, systematic, data-based, critical, objective,
scientific inquiry into a specific problem, undertaken with the purpose of
finding answers or solutions to it.
o Why do we need research? To broaden knowledge and to be able to make
sound, well-informed decisions.
o Knowledge: the sum of what we know.

The essence of scientific research:


Objectivity
A clearly delineated problem statement
The role of theory (papers, previous research)
Methodology: replicable
Profound, solid analysis of the data (words or numbers)
Impersonal, formal report
Ethical issues in Business Research:
Are certain outcomes (politically) desirable?
Confidentiality, permission and privacy of respondents
Respondents should always be able to know the goal of the research they
participate in
Full disclosure on methods, data, and results
Non-significant results, contradictory results, and illogical results
Managers should know about research for: good decision making, effective
relationships between the managers and the consultant-researcher, and the
advantages and disadvantages of external and internal consultants.

Chapter 2
Business research classifications:
Fundamental vs. applied research
Inductive vs. deductive research

Induction: Draw a general conclusion that is based on specific observations
(theory generation)
Start with: Observation -> Pattern -> Tentative hypotheses -> Theory
Deduction: Test whether a general theory can solve or explain a specific
problem (theory testing)
Start with: Theory -> Tentative hypotheses -> Pattern -> Observation
Both are used in scientific research

Exploratory vs. descriptive vs. causal research

Exploratory
Focused on the development of theories and understanding
Qualitative
Little prior knowledge
Key question: Why?
Focused on theory generation
Descriptive: can be split up into
1. Counts and interrelations (more like bookkeeping): very practical,
little theory
2. Correlational research (the extent to which variables vary together,
more academic)
Causal (works from a deductive approach)
Cause-and-effect relationships
There is theory and prior knowledge

Experimental vs. non-experimental research

Experimental research: is causal in nature; variation is applied to a certain
phenomenon to test whether this affects another phenomenon
Non-experimental research: often correlational research, can be qualitative.
Aimed at discovering, describing, and explaining

Qualitative vs quantitative research


Qualitative: soft data like experiences, views, ideas, and motives
Narrative in nature
Often type out complete interviews
What type
Which category
Quantitative: hard data like numbers and amounts
How much
How many
How often
Most research is done quantitatively; qualitative research is less common

Primary vs secondary research


Primary: Your own research
Self-collected data
Fully focused on answering the main problem statement
Flexible (generate own data)
Secondary: Based on research and publications of others
External data (already existing)
Less focused on answering the main problem statement
Inflexible (deal with data available)

A summary of classifications
Caution: Exploratory research can be quantitative in nature
Take this table as a generalization; there are sometimes exceptions

Characteristics (hallmarks) of scientific research:


Purposiveness
You need a clear goal (objective, what you want to know)
Rigor: connotes carefulness, scrupulousness (= zorgvuldigheid) and the
degree of exactitude in research investigations.
Theoretical base & sound (= good) methodological design (e.g. use
literature to show why your research is necessary)
Testability: scientific research lends itself to testing logically developed
hypotheses to see whether or not the data support the educated
conjecture or hypotheses that are developed after a careful study of
the problem situation.

Based on obtained data


Replicability: the extent to which a re-study is made possible by the
provision of the design details of the study in the research report.
Clear and structured methodology
Precision and confidence:
Precision: the closeness of the findings to reality based on a sample.
Confidence: the probability that our estimations are correct.
Accurate and reliable
Objectivity: conclusions should be based on facts of the findings
derived from actual data, and not on our own subjective or emotional
values.
Only facts count
Generalizability: the scope of applicability of the research findings in
one organizational setting to other settings.
Results are applicable in different settings
Parsimony: Adoption of the simplest assumption in the formulation of a
theory or in the interpretation of data. Things should be made as
simple as possible but not any simpler. Try to focus on some issues
but do not try to solve them all
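The "precision and confidence" hallmark above can be made concrete with a small numeric sketch (the sample values and the normal-approximation z = 1.96 are illustrative assumptions, not from the source): the narrower the confidence interval, the higher the precision; the confidence level is the probability that intervals constructed this way contain the true value.

```python
import math

# Hypothetical sample: weekly sales figures from a random sample of stores.
sample = [102, 98, 110, 95, 105, 99, 104, 101, 97, 106]

n = len(sample)
mean = sum(sample) / n
# Sample standard deviation (n - 1 in the denominator).
sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
se = sd / math.sqrt(n)  # standard error of the mean

# 95% confidence interval using the normal approximation (z = 1.96).
z = 1.96
ci = (mean - z * se, mean + z * se)

print(f"mean = {mean:.1f}, 95% CI = ({ci[0]:.1f}, {ci[1]:.1f})")
```

A larger sample shrinks the standard error and hence the interval: more precision at the same confidence level.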
The research process: how can I fully utilize the hallmarks?
Previous literature is clear on one thing: The research process is a
multi-stage process
However, no two books come up with the same stages
The exact number of stages varies
Stages can be skipped or be performed simultaneously
Sometimes you have to go back a stage if new developments
occur
But the idea of a multi-stage process is useful!
Hypothetico-deductive method: developed by philosopher Karl Popper
- The only real scientific methodology:
1. Describe problem area (you should not be the only one
interested in solving it)
2. Formulate problem statement
3. Develop hypotheses
4. Determine measures
5. Collect data
6. Analyze data
7. Interpret data
- Testing hypotheses is by definition a deductive method!
Hypothetico-deductive method: Example

- Positivism: science and scientific research is seen as the way to get at the
truth (there is an objective truth out there), to understand the world well
enough so that we are able to predict and control it. The world operates by
laws of cause and effect that we can discern if we use a scientific approach
to research.
- Constructionism: the world (as we know it) is fundamentally mental or
mentally constructed. Constructionists aim to understand the rules people
use to make sense of the world by investigating what happens in people's
minds.
- Critical realism: combination of the belief in an external reality (an objective
truth) with the rejection of the claim that this external reality can be
objectively measured; observations will always be subject to interpretation.
- Pragmatism: research on both objective, observable phenomena and
subjective meanings can produce useful knowledge, depending on the
research questions of the study. The focus is on practical, applied research
where different viewpoints on research and the subject under study are
helpful in solving a problem.
Lecture 1 Chapter 1 / Chapter 2 / Chapter 3


Where does a research project start?
o First of all: Driven by curiosity
o Within a business environment the decision problem is:
Driven by management
Action-oriented
The search for a topic
o Research: the process of finding solutions to a problem
o Topics in applied research are management issues
Related to the concerns, issues, conflicts, or beliefs about daily work
Problem: a particular question or issue, bottleneck
Gap between actual and desired situation
Preliminary research
Organization/Context
Existing literature Existing knowledge
Helps you to:
Further delineate the problem

To sharpen the problem statement


To formulate the (research)questions
Determine the relevance of the problem
Decide what type of research fits the problem
Starting point: Research Proposal
Goal is to make clear why you conduct research, what is under
investigation and how you will conduct your research
FUNCTION ~ Organizing your ideas & convincing your reader(s)
A. Title:
There is room for creativity,
BUT:
The title should reflect the content of your research
It may be that a subtitle is necessary.
For example: More women on the Board!
B. Background/motivation:
What is the problem?
External developments?
At a consumer level?
At a competitor level?
The direct environment?
Internal developments?
Targets?
Strategic plan(s)?
New manager?
Employees?
Building blocks background section
C. Problem statement: consists of two parts
Research objective:
What is the aim of the research?
Related to the WHY of the research
Examples:
To establish the determinants of employee involvement
To understand the causes of employee absence
Main research question (= problem statement in several cases).
Related to the WHAT of the research.
Types:
Type          Used in academic environment?   Example
Descriptive   Sometimes                       What are the most important differences between editions of ...?
Explanatory   Often                           What is the cause of ...?
Verifying     Sometimes                       Can the function of ... be improved by ...?
Advisory      Not used                        Which actions need to be taken to ...?
Prescriptive  Not used                        Which procedure should be followed in order to ...?
A good problem statement should
1) Ask about the relationship between 2 or more variables

2) Be stated clearly and unambiguously, usually in question form


3) Be amenable to empirical testing
4) Not represent a moral or ethical position
Criteria for a good problem statement
1. Relate at least 2 variables
How are A and B related?
How is A related to B under the condition of C?
What is the difference in A and B under different conditions of C?
2a. Clear & unambiguous
Avoid ambiguity
How can a company improve its efficiency/effectiveness?
What do you mean by efficiency or effectiveness?
What is the attitude of Japan with respect to the Western world?
How are Japan or western world specified?
Are people using the Internet more?
Compared to when?!
Be focused
Why do our students underperform?
Why do our IBA students underperform?
Be relevant (especially within the context of your research goal)
Research objective could be: To understand differences in academic performance
Research problem could be: What is the influence of self-esteem on the academic
performances of first year students?
2b. Usually in question form
Avoid yes or no questions
Preferably in an open-ended question form
Avoid starting your question with words like how or to what extent
Are there any prejudices against hooligans?
Which factors determine the general attitude towards hooligans?
3. Amenable to empirical testing
Possibility to answer the question
Wrong questions are:
Is there a reason why people use the internet?
How much does a kiss weigh?
Feasibility
Necessary competences
Pertinent data
Financial resources
Time
Discipline
4. No moral or ethical judgments
Avoid personal judgments
Avoid value-laden words and terms

Avoid unnecessary jargon or overly complex sentence constructions


Finally: Some good examples:
- Which characteristics determine customer buying intention of the iPhone
6?
- What is the effect of the design of the website of company X on the
click-through ratio of the links on the website?
- Which factors influence Perceived Service Quality (PSQ)?
D. Research questions:
Theoretical research questions: How can theory give us insights into our
problem?
Practical research questions: How can we use the results to solve our
problem?
E. Demarcation:
Population: Which factors influence the switching intentions of young adults in
the insurance industry?
Variables: Although there are many factors that have an influence on
customer satisfaction, this study only takes the behavior of sales people into
account.
F. Relevance/contribution (theoretical AND managerial)
What is the contribution of the study?
1. What is the contribution of the study from an academic perspective?
Nothing is known about a topic
Much is known about a topic, but the knowledge is scattered and not
integrated
Much research is available, but the results are (partly) contradictory
Established relationships do not hold in certain situations
2. What is the contribution of the study from a managerial (practical)
perspective?
It relates to a problem that currently exists in a company
It relates to an area that a manager believes needs to be improved

G. Method:
How to collect the data? Desk/field research, Experiment, Survey, Interviews
or Observation.
H. Planning:
Make a scheme of when you are going to do what and when it will be finished.
I. References (sources)
List all the sources you use
Be aware of the difference between quoting, paraphrasing and summarizing!
Make use of APA.

Lecture 3 - Chapter 6
Research process for each type of research

Exploratory Research
- Undertaken when not much is known about the situation at hand, or no
information is available on how similar problems or research issues have
been solved in the past. Extensive preliminary work needs to be done to
understand what is occurring, assess the magnitude of the problem,
and/or gain familiarity with the phenomena in the situation.

Methods
Existing data: secondary data

Observation: primary data


Control (artificial vs. natural environment)
Group membership (participant vs. non-participant)
Structure (focus, predetermined, systematic, etc.)
Concealment (do they know they are observed?)
Ethnographic research: primary data
Engage with a specific culture
Normally performed by well-trained sociologists and anthropologists
Goal is to directly observe behavior and to experience the ins & outs of
everyday life of the specific ethnic group.
Can lead to useful insights, but because it is difficult to implement, it
normally doesn't lead to new insights.
People adapt their behavior (Hawthorne Effect)
Focus Groups: primary data, a discussion with 8 to 10 participants about a
specific subject. Participants are chosen based on their expertise.
Goal is to gain expressions, interpretations, and opinions when people
talk about a certain happening, concept, or product/service. Mostly
results in rich and versatile data, but possibly misleading as well
Possibly misleading data as a result of:
- Conformity (Asch, 1954): do and/or think what the majority does/thinks (=
normative social influence)
- Obedience (Milgram, 1962): the willingness of a participant to obey other
people in a higher position of authority, even if this violates the
participant's own morals and ethics.
Over 50 years later:
Experiment replicated in multiple countries
Full-obedience rates vary across studies, but are stable across all
countries
No difference between men and women, however women show more
signs of emotional conflict.
No signs of learning: the obedience rates haven't changed over the
last 50 years.
In-depth interviews: primary data, obtaining data in a (non)structured way
by asking people questions about a certain topic.
Ethnographic research and focus groups are examples, but many other
types.
Important factors: ability interviewer, interview setting, and motivation
interviewee.
Interviews: Means-End Chain = For what reasons do people display a certain
behavior.
Why is that important to you? (Laddering Technique)
Techniques to find answers: Upwards laddering, Downwards laddering,
Sidewards laddering, Negative laddering, Third-person probing, Redirecting,
Evoke situational context, Postulate absence.
Are self-reporting studies (= interviews) reliable?
Respondents' lack of self-insight:
o Unaware of existence of one or more stimuli
o Unaware of existence of certain behavior patterns

o Unaware of relationship between stimuli and a specific behavior
pattern
Answers are prone to many kinds of response bias.
o Social desirability
o Actor-observer bias
o Self-serving bias

Projective techniques: primary data, to capture ideas and thoughts that are
hard to articulate or play a role unconsciously (motivational research)
A variety of techniques, like the word association test, thematic
apperception test, animal metaphor test, and the inkblot test.
Conclusions are drawn based on the interpretation of the answers. Has
its origin in psychology/psychoanalysis.
Associative techniques: primary data
Implicit Association Test: a cognitive response technique designed to detect
the strength of associations between mental representations of objects in
memory. The test is often applied to measure unconscious prejudices, or
implicit preferences with respect to societal groups.
Is the IAT reliable?
IAT measures cultural conceptions rather than individual ideas.
Effect size and directions are strongly dependent on stimuli and labels.
IAT-effects are prone to order-effects
IAT tests could easily be influenced by other (external) factors; reading
a short introduction story can already influence the effects
Descriptive study: designed to collect data that describe the characteristics of
persons, events, or situations. Either quantitative or qualitative in nature.
Final remark:
o Time & effort vs number of useful insights
o Representativeness and reliability
o Use of mixed methods: use of multiple types of research within 1 study
Descriptive Research

Methods

Surveys (questionnaires): A survey is a system to obtain information in
order to describe, compare, and explain knowledge, attitudes, and behavior
of people. You can obtain qualitative and quantitative data. Descriptive
research with a survey means the collection of quantitative data with a
questionnaire, which helps the researcher to describe the association between
variables.
Cross-sectional: everything is measured one moment in time
Longitudinal: measurements take place multiple moments in time
Positioning studies: A survey to discover relational positions
Example: perceptual mapping through multidimensional scaling
Segmentation studies: Segmentation is making distinction between
groups. By means of existing data or a questionnaire on:
Demographics: age, income, gender, marital status, social group,
household size, etc.
Psychographics: lifestyle, values, personality, etc.
Behavior: when, what, how often, to what extent, with whom, where,
etc.
Decision making: own choice or group's choice, low or high
involvement, attitude, knowledge, etc.
Database research: A collection of data, publicly available or retrieved from
a data bank, that is analyzed to make predictions or to test hypotheses.
Examples: COMPUSTAT, Thomson Reuters, Bankscope
(https://libsearch.uvt.nl/nl/ecogroup.html)
Or data obtained from a company or another researcher.
In any case: check the trustworthiness and quality of the data and the data source!
Causal Research
- The researcher is interested in delineating one or more factors that are

causing the problem.


Good to know
Causality: the presence of a cause and effect relationship.
A causal model (boxes and arrows) is a structured representation of the
relationship between variables

Variables are: everything that can take on different values

There is only causality when 4 conditions are met:


1) Presence of correlation: The variables considered are somehow connected
with each other; there is a certain association between them. This correlation
can be either positive or negative.
2) Chronological order of events: X (independent variable) precedes Y
(dependent variable). This order is of importance because we want to
manipulate X in order to cause a change in Y.
3) Control for other causes: There is no variable Z explaining both X and Y. There
is no variable Z that explains the effect of X on Y.
Walter: Experiment 1:
Hourly wage of women --> 10 boxes of candy an hour
Piece wage of men --> 6 boxes of candy an hour
Conclusion: strong effect of type of reward on labor productivity
Jesse: Experiment 2:
Hourly wage of men --> 8 boxes of candy an hour
Piece wage of women --> 8 boxes of candy an hour
Conclusion: type of reward has no effect
4) A logical explanation: Rational reasons can be formulated that form a basis for
assuming that an expected cause-effect relationship exists within the
organization
Pitfall 1: using authorities
Hans says so, so it must be true! | He is in the papers!
Pitfall 2: the selection of convenient examples/results
Choosing Walter (2007) and not Jesse (2010)
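The "control for other causes" condition, and the Walter/Jesse pitfall above, can be illustrated with a small simulation (all numbers are hypothetical assumptions for this sketch): a confounder Z that drives both X and Y produces a clear correlation even though X has no causal effect on Y at all.

```python
import random

random.seed(1)

def pearson(xs, ys):
    """Plain Pearson correlation, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical confounder Z (say, experience) drives BOTH the chance of
# being on piece wage (X) and productivity (Y); X itself causes nothing.
z = [random.random() for _ in range(1000)]
x = [1 if zi + random.gauss(0, 0.2) > 0.5 else 0 for zi in z]  # reward type
y = [10 * zi + random.gauss(0, 1) for zi in z]                 # productivity

print(f"corr(X, Y) = {pearson(x, y):.2f}")  # clearly positive, yet not causal
```

Condition 1 (correlation) is satisfied here, but condition 3 (control for other causes) is violated, so concluding "reward type affects productivity" would be wrong.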
Methods
Experimental research
Experiment: a data collection method in which one or more IV are
manipulated to measure the effect of this manipulation on the DV, whilst
controlling for other causes.
Internal validity: the extent to which the found cause-effect relationship
is reliable.
External validity: the extent to which the found cause-effect
relationship can be generalized across other situations, people, and/or
events.

Lecture 4 - Chapter 3 / Chapter 4 / Chapter 5

Formulating the problem statement


Decision problem: manager-focused, action-oriented
Problem statement: research-focused, information-oriented
Problem statement:
Typically 1 per study.
Derived from problem indication / background.
Combination of the main research question (WHAT) and research
objective (WHY).
Often explanatory in nature.
A good problem statement should:
1) Ask about the relationship between 2 or more variables
2) Be stated clearly and unambiguously, usually in question form
3) Be amenable to empirical testing
4) Not represent a moral or ethical position
Doing research = thinking in terms of variables
Variable: Everything that can take on different values
o Differences between people/objects on a particular point in time
Cross-sectional research
o Differences between several points in time for the same
person/object
Longitudinal research
By means of scientific research we want to:
Explain a certain phenomenon (dependent variable).
We can do this by including independent, mediating and/or moderating
variables in our analysis.

Types of variables
Dependent variable (Y): variable of primary interest to the researcher.
The phenomenon you are trying to understand, explain and/or predict.
o Examples:
- What is the effect of different types of rewarding systems on labor
productivity?
- How does the addition of an online channel by the manufacturer influence
trust among retailers?
o Knowledge of the dependent variable helps us to solve the problem
we have indicated at the beginning of the study.
Independent variable (X): one that influences the dependent variable in
either a positive or negative way.
o Variation in the dependent variable is explained by variation in the
independent variable(s)
o Example: How does intelligence (X) influence the exam grade (Y)? To
what extent does work atmosphere (X) influence job satisfaction (Y)?
Do women drive a car better than men?
o There is a cause-and-effect relationship between the independent
variable and the dependent variable.

Moderating variable: has a strong effect on the relationship between the
independent variable and the dependent variable. A variable that alters
(strengthens/weakens) the original relationship between the independent
and the dependent variable.
o The correlation between X and Y changes when the moderator takes
on a different value.
o A moderating variable allows us to:
Model the effect of demographic variables on the relationship
between the IV and the DV (for example: gender)
Model the effect of situational characteristics on the
relationship between the IV and the DV (for example:
motivation)

Imagine: We think this relationship is not the same for each respondent. We think
that women learn faster how to parallel park a car compared to men. In other
words: we think that there is a different relationship between the number of hours
of driving lessons and the ability to parallel park a car for men compared to women.
What does this mean for the relationship between # hours driving lessons and
the ability to park a car?
How do we visualize this in our conceptual model?
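A minimal sketch of what moderation looks like in data (the driving-lesson numbers and the per-group slopes 0.8 and 0.4 are assumptions for illustration): estimating the lessons-ability slope separately per gender reveals the different relationships the text describes; an X*gender interaction term in a single regression would capture the same thing.

```python
import random

random.seed(42)

def slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Hypothetical data: parking ability improves with driving lessons (X),
# but faster for women than for men, so gender moderates the X-Y relation.
hours = [random.uniform(0, 20) for _ in range(400)]
gender = [random.choice([0, 1]) for _ in range(400)]  # 1 = female (assumption)
ability = [
    (0.8 if g else 0.4) * h + random.gauss(0, 1)  # different slope per group
    for h, g in zip(hours, gender)
]

women = [(h, a) for h, g, a in zip(hours, gender, ability) if g == 1]
men = [(h, a) for h, g, a in zip(hours, gender, ability) if g == 0]

b_women = slope(*zip(*women))
b_men = slope(*zip(*men))
print(f"slope women = {b_women:.2f}, slope men = {b_men:.2f}")
# Unequal slopes are exactly what an X*gender interaction term picks up.
```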

Intervening variable / Mediating variable: surfaces between the time


the independent variables start operating to influence the dependent
variable and the time their impact is felt on it. There is thus a temporal (=
tijdelijk) quality or time dimension to the mediating variable.
Helps us to clarify why the independent variable has an influence on
the dependent variable
Gives insight in the underlying process

Hypotheses: are the assumptions about the relationships between these


variables.
Dilemma: which independent variables do we consider in our study?
The exclusion of relevant variables leads to:
o Lower model fit (r squared).
o Omitted variable bias (serious problem!)
The inclusion of irrelevant variables leads to:
o A less parsimonious model
o A reduction of the precision with which the effects of relevant
variables are estimated (the standard errors of the estimates increase)
What is the Omitted variable bias?
Omitted variable bias: The lack of important variables in the model
o Consequence: Biased parameter estimates in the model.
o Consideration: Add wrong variables to the model?
o It is better to add wrong variables to the model than to leave
good variables out
o Nothing beats good theory
o Negative effects of the omitted variable bias can be reduced by
careful literature review
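The omitted variable bias can be shown numerically in a small simulation (the "true model" Y = 1.0*X + 1.0*Z and all coefficients are assumptions for this sketch): when a relevant Z correlated with X is left out, the estimated effect of X absorbs part of Z's effect and is biased.

```python
import random

random.seed(7)

def ols_slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Assumed true model: Y = 1.0*X + 1.0*Z + noise, with Z correlated with X.
x = [random.gauss(0, 1) for _ in range(2000)]
z = [0.8 * xi + random.gauss(0, 0.6) for xi in x]  # Z correlated with X
y = [1.0 * xi + 1.0 * zi + random.gauss(0, 0.5) for xi, zi in zip(x, z)]

b_omitted = ols_slope(x, y)  # regression of Y on X with Z omitted
print(f"estimated effect of X with Z omitted: {b_omitted:.2f} (true effect: 1.0)")
```

The estimate lands near 1.8 rather than 1.0: the bias equals the effect of Z times how strongly Z tracks X, which is why good theory and a careful literature review (to know which variables matter) reduce the damage.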
Moderator VS. Mediator
A moderator is a qualitative (e.g., sex, race, class) or quantitative (e.g., level
of reward) variable that affects the direction and/or strength of the relation
between an independent or predictor variable and a dependent or criterion
variable.
Key question: When or for whom does X cause an effect in Y?
In general, a given variable may be said to function as a mediator to the
extent that it accounts for the relation between the predictor and the
criterion variables.
Key question: How or why does X cause an effect in Y?

A certain variable can take on the role of a moderator as well as a mediator. This
depends on the theory that is tested!
Quasi moderation: Mo moderates the relationship between X and Y, but also
has a direct effect on Y.
Pure moderation: Mo moderates the relationship between X and Y, but it has
no direct effect on Y.
Full mediation: X only has an effect on Y through Me (= Indirect-Only
Mediation).
Partial mediation: X has an indirect effect on Y through Me, but also a direct
effect on Y (= either Complementary OR Competitive Mediation).

Difference between full and partial mediation

o Complementary Mediation: Mediated effect (a*b) and direct effect c both exist
and point in the same direction.
o Competitive Mediation: Mediated effect (a*b) and direct effect c both exist and
point in opposite directions
o Indirect-Only Mediation: Mediated effect (a*b) exists, but not direct effect
o Direct-Only Non-Mediation. Direct effect c exists, but no significant indirect
effect a*b
o No-Effect Non-Mediation. Neither direct nor indirect effect exists.
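The a, b, and c paths above can be sketched with a tiny simulation (the path coefficients 0.7 and 0.9 and the indirect-only setup are assumptions for illustration): path a is the slope of the mediator on X, path b the slope of Y on the mediator, and with no direct path the total X-Y effect is approximately the indirect effect a*b.

```python
import random

random.seed(3)

def ols_slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Hypothetical indirect-only mediation: X -> Me -> Y, no direct X -> Y path.
x = [random.gauss(0, 1) for _ in range(2000)]
me = [0.7 * xi + random.gauss(0, 0.5) for xi in x]   # path a
y = [0.9 * mi + random.gauss(0, 0.5) for mi in me]   # path b

a = ols_slope(x, me)
b = ols_slope(me, y)
total = ols_slope(x, y)  # total effect of X on Y

print(f"a = {a:.2f}, b = {b:.2f}, indirect a*b = {a*b:.2f}, total = {total:.2f}")
# With no direct path, the total effect roughly equals the indirect effect a*b.
```

Adding a direct X term to the Y equation would turn this into partial (complementary or competitive) mediation, with the total effect splitting into a*b plus c.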

By means of scientific research we want to:


Explain a certain phenomenon (dependent variable)
We can do this by including independent, mediating and/or moderating
variables in our analysis
Use theory to choose which variables are included
Draw the conceptual model based on existing theory supplemented with own
ideas:
A model of how one theorizes or makes logical sense of the relationships among
the several variables that have been identified as important to the problem.
A critical literature review:
A literature study is a step-by-step process in which published as well as unpublished research with respect to a certain topic is acknowledged, critically
evaluated in relation to the problem under investigation, and finally documented as
clearly as possible.
Why a critical literature review?
To obtain insights on a specific topic
What do experts say about a topic?
Specify research questions and put into context.
NOTE: Statements and claims in a literature review need to be supported
by proof (represented by references), to increase credibility
Goals of the critical literature review:
To position your current research relative to already existing research in order
to build upon the latter.
Prevents the risk of reinventing the wheel
Forces the researcher to look at a problem from different perspectives
It becomes more evident which variables are important.
A guideline for formulation of definitions and other terminology.
Clarifies how certain variables are expected to relate to each other.
It improves the testability and the replicability of your current research.

The results of your current research are linked to already existing knowledge.

Existing literature
Concepts, instruments, conceptual frameworks, theories (logical reasons behind
certain phenomena), outcomes of scientific research, etc.
Helps you to understand as well as develop your own thoughts about a topic
or problem
Different types of literature:
o Primary literature
o Secondary literature
o Tertiary literature
Quality even within primary literature differs to a large extent
o Theoretical framework:
Conceptual background: What do we already know about this topic?
Overview of literature on which you built further.
Decent amount of literature: deductive research project, theoretical
framework
Deductive research: from a general theory to specific observations
Represents your beliefs on how certain phenomena (or variables or concepts)
are related to each other and an explanation of why you believe that these
variables are related to each other (Sekaran & Bougie).
Central elements:
o Identify and name relevant variables
o How do phenomena relate to each other?
conceptual (causal) model and hypothesis
o Why do phenomena relate to each other?
theoretical underpinning
Definitions of variables: literature study
Relevant variables in Theoretical Framework and ideas about relationships
between these variables can be identified with introspection, interviews,
observations, literature study.
Models
Your representation of reality: it helps you to express your ideas about how
the world works. A way to stimulate discussions about how the world works
A classification of models:
Descriptive model
Use of blocks, arrows, concepts.
Example: Hierarchy of effects model, Lavidge and Steiner.
Descriptive marketing model that helps to set goals in communication.

Causal models (correlational/causal research): we try to understand


something or, even better, explain or predict something.
Blocks, arrows, and concepts express the cause-and-effect relationship
between variables. In other words: variation in the type of reward system
explains variation in labour productivity.
Testing the model: Measurement, data collection in the form of numbers
Question/Problem: What is the effect of different types of reward
systems on labour productivity?
The idea: piece wage has a positive effect on labour productivity
(compared to an hourly wage)

The basis of theoretical frameworks: hypotheses


Definition: A logically conjectured relationship between two or more variables
expressed in the form of a testable statement. (A leads to B)
Describes relationships between independent, moderating, mediating, and
dependent variables.
By testing the hypothesis a solution to a problem can be found.
Theoretical framework consists of:
Variables (building blocks) including definitions
Hypotheses relationships between variables (cement)
A graphical representation of this (construction drawing) = conceptual
model which is complete! (do not name new variables later on)
Every variable in the drawing is defined. Relationships have a logical
justification / scientific backing. Based on previous research, theory,
and own reasoning.
Example conceptual model:
Work stress is caused by:
- Conflicts at the workplace
- Job uncertainty
- Organizational issues
- Working at a fast rate / working speed
The effect of working speed can be reduced by control options.
Work stress can lead to absenteeism.
Hypotheses
A logical expected relationship between two or more variables in the form of a
testable statement.
Criteria:
State relationship between variables
Consistent with literature/theory/model

Testable (measurable variables)


Unambiguous, clear, and grammatically well-formed
Null hypothesis:
o The outcome you get by coincidence
Alternative hypothesis:
o Research hypothesis (what you expect to happen)
o One-sided (directional) (interpretation of significance)
o Two-sided (non-directional)
Good examples:
H0: There is no difference in the likelihood of buying an iPhone
between different income groups.
Ha: There is a difference in the likelihood of buying an iPhone
between different income groups. -> Researchers only formulate Ha
and call this H1.
Ha: Trustworthiness has a positive influence on perceived service
quality
H0: Trustworthiness has no or a negative influence on perceived
service quality
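These hypothesis pairs can be tested statistically. A minimal sketch of a chi-square test of independence for the iPhone example, in pure Python; all counts are invented for illustration, and the 5% critical value for df = 2 is taken from a standard chi-square table:

```python
# Chi-square test of independence: income group vs. iPhone purchase.
# Counts are invented for illustration.
observed = {
    "low":    {"bought": 20, "not_bought": 80},
    "middle": {"bought": 35, "not_bought": 65},
    "high":   {"bought": 50, "not_bought": 50},
}

row_totals = {g: sum(c.values()) for g, c in observed.items()}
col_totals = {
    "bought":     sum(c["bought"] for c in observed.values()),
    "not_bought": sum(c["not_bought"] for c in observed.values()),
}
n = sum(row_totals.values())

# Sum of (observed - expected)^2 / expected over all cells.
chi2 = 0.0
for group, counts in observed.items():
    for outcome, obs in counts.items():
        expected = row_totals[group] * col_totals[outcome] / n
        chi2 += (obs - expected) ** 2 / expected

df = (len(observed) - 1) * (len(col_totals) - 1)  # (3-1)*(2-1) = 2
critical_5pct = 5.991                             # chi-square table, df = 2
reject_h0 = chi2 > critical_5pct
print(f"chi2 = {chi2:.2f}, df = {df}, reject H0: {reject_h0}")
```

If chi2 exceeds the critical value, H0 (no difference between income groups) is rejected in favour of Ha.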
Argumentation:
Why is this important? Build your hypothesis on logical argumentation to
clarify why you expect a certain relationship to exist.
Wrong: a well-established writer in the field has said X so it must
be true.
Why does service encounter satisfaction lead to loyalty? Not because
William (1932) says so. So why then?
Satisfaction = fulfillment of needs.
We make use of services to satisfy our needs.
We go to a bar (service) to enjoy ourselves (need).
If we indeed have fun at this bar this will increase the possibility
that we will go to the same bar the next time (if the same need
arises)
Lecture 5 Chapter 6 / Chapter 13
Problem definition
From decision problem to problem statement to research questions to hypotheses.
Thinking in terms of variables, which are explained in a theoretical framework.
Research design
Refers to the overall strategy that you choose to integrate the different components
of the study in a coherent and logical way, thereby, ensuring you will effectively
address the research problem.
The problem research design relationship
Research problem = Conceptual
Research design = Empirical
NOTE 1: Within the constraints put on the researcher

NOTE 2: Without neglecting the design problem


Components of a research design (Cook & Campbell, 1979)
Research methodology plus research strategy
Choice of research tools
Choice of statistical techniques
Sampling design
Research design involves making choices

Research tools
Definition: All research materials that are necessary to collect the needed
information for your study.
Choice of a specific research method and/or mode
Construct research instrument(s)
Develop procedures (working plan / guidelines)
Research methodology & strategy VS Research tools
o Focus
Research methodology & strategy: End-product
Research tools: Specific steps in the research process
o Point of departure:
Research methodology & strategy: Research problem
Research tools: Specific data collection tasks
Analyzing the collected data
Raw data means nothing to researchers without the proper tools to analyze and
interpret that data
Therefore we use statistical methods and/or techniques.
Statistical calculations: Descriptive and Inferential
Descriptive:
Measures of central tendency
Measures of variability
Measures of relative position
Measures of relationship
Inferential:
Difference between means
Analysis of (co)variance
Correlation methods
Chi-square test (goodness of fit versus independence)
Regression analysis
Choice of a statistical technique is complex! It depends on multiple factors,
including the number of variables, the distinction between independent and
dependent variables, comparing groups against benchmark(s), and measurement level.
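To illustrate the descriptive measures listed above (central tendency, variability, relative position, relationship), a small sketch using Python's standard library; the sales and advertising numbers are invented:

```python
import statistics as st

# Toy sample (values invented): weekly sales (x) and ad spend (y).
x = [10, 12, 9, 14, 11, 13, 15, 10]
y = [22, 25, 20, 30, 24, 27, 31, 21]

mean_x = st.mean(x)               # measure of central tendency
sd_x = st.stdev(x)                # measure of variability (sample sd)
z_last = (x[-1] - mean_x) / sd_x  # measure of relative position (z-score)

# Measure of relationship: Pearson correlation, computed by hand.
mean_y, sd_y = st.mean(y), st.stdev(y)
cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y)) / (len(x) - 1)
r = cov / (sd_x * sd_y)
print(round(mean_x, 2), round(sd_x, 2), round(z_last, 2), round(r, 2))
```

Inferential techniques (t-tests, ANOVA, regression, chi-square) then use such sample statistics to draw conclusions about the population.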
Data levels and measurements
Unit of analysis
Individuals
Dyads
Groups
Organizations

What is a population?
Population: entire group of people, events, or things of interest that the
researcher wishes to investigate.
Purpose of conducting research: collecting information about characteristics
of the population.
From a population to a sample
Sample: subset/selection of the population OR a smaller (but hopefully
representative) collection of units from a population used to determine truths about
that population.
Why do we need a sample?
Costs of a sample < costs of population
Quantity of data should be manageable
Often, the entire population is not available
Sample statistics are often used to make inferences about population
parameters
Estimates: e.g., based on the sample mean/fraction you make
inferences about the (true) population mean/fraction
Over- and underestimation possible!
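The point that sample statistics both over- and underestimate population parameters shows up in a small simulation; the population (10,000 customer ages) and the seed are arbitrary assumptions:

```python
import random
import statistics as st

random.seed(42)

# Hypothetical population of 10,000 customer ages (values invented).
population = [random.gauss(40, 12) for _ in range(10_000)]
true_mean = st.mean(population)

# Each sample mean estimates the population mean, with sampling error:
sample_means = [st.mean(random.sample(population, 100)) for _ in range(50)]
errors = [m - true_mean for m in sample_means]  # over- AND underestimates
print(round(true_mean, 1), round(min(errors), 2), round(max(errors), 2))
```

Across the 50 samples, some means fall above and some below the true population mean, but on average they are close to it.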
Sampling: Procedure where a given number of members from a defined population
are selected as representative subjects of that population.

Classification sampling techniques


Nonprobability sampling:
The probability of selection of each sampling unit is unknown.
Data cannot be used to make inferences about the population.
Techniques: convenience, judgment, and quota sampling.
Probability sampling:
Each sampling unit has a known, non-zero chance of being selected.
Unbiased, random selection of the sampling units. Proper
representation of the target population. Data can be used to make
inferences about the population.
Techniques: simple random, systematic, stratified, cluster, and area
sampling.
Convenience: shopping street
Obtaining participants conveniently with no requirements whatsoever
Quota: shopping street + selection
Select on the basis of specific criteria (e.g., gender, customer card
yes/no)
Judgmental: interviewing experts
Study of labor problems: people who have experienced on-the-job
discrimination
Simple Random Sampling (SRS): Student in the FEB database: lottery
Systematic sampling: Each 5th student in the FEB database
Stratified sampling: Divide the population in homogenous groups and then
apply SRS within each subgroup
Cluster sampling: 1 street. Divide the population in heterogeneous groups,
randomly select a number of groups and select each member within these
groups
Area sampling: specific type of cluster sampling in which clusters consist of
geographic areas
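Three of the probability techniques above can be sketched in a few lines; the sampling frame of 500 students and the programme attribute are invented:

```python
import random

random.seed(7)

# Hypothetical sampling frame: 500 students, each with a study programme.
frame = [{"id": i, "programme": random.choice(["BSc", "MSc"])}
         for i in range(500)]

# Simple random sampling (SRS): a lottery over the whole frame.
srs = random.sample(frame, 20)

# Systematic sampling: every 25th unit after a random start.
start = random.randrange(25)
systematic = frame[start::25]

# Stratified sampling: split into homogeneous strata, then SRS per stratum.
stratified = []
for prog in ("BSc", "MSc"):
    stratum = [u for u in frame if u["programme"] == prog]
    stratified += random.sample(stratum, 10)

print(len(srs), len(systematic), len(stratified))
```

All three yield 20 respondents here, but only the stratified sample guarantees equal representation of both programmes.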

Sample size
How many respondents do you need?
Factors influencing sample size:
Type of research, number of variables of interest, number of
(sub)groups in the population, desired precision, the analysis technique
and its underlying assumptions, resources, etc.,
Formulas versus rules of thumb
Examples rules of thumb:
o Sample > 50, < 500
o Multivariate research: 10 to 15 times the number of variables
o Subsamples (e.g., male/female): a minimal sample size of 30 for each
category is necessary
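These rules of thumb can be bundled into a small helper; the function name and the default multipliers (15 per variable, 30 per subgroup) are my own packaging of the rules listed above, not a formula from the lecture:

```python
def rule_of_thumb_sample_size(n_variables, n_subgroups=1,
                              per_variable=15, per_subgroup=30):
    """Heuristic minimum sample size from the rules of thumb above:
    10-15 respondents per variable, at least 30 per subgroup,
    an overall floor of 50, and an upper bound of 500."""
    n = max(50, per_variable * n_variables, per_subgroup * n_subgroups)
    return min(n, 500)

print(rule_of_thumb_sample_size(6))                 # 6 variables
print(rule_of_thumb_sample_size(2, n_subgroups=2))  # male/female split
print(rule_of_thumb_sample_size(40))                # capped at 500
```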
Quantitative researchers seek statistical validity
Qualitative researchers seek saturation
Sampling challenges
Systematic error: A difference between the sample and the population that is
due to a systematic difference between the two rather than random chance
alone.
Response rate problem
Coverage error
Characteristics of a good research design

Objectivity
Generalization
Reliability
Validity
Adequate information

Lecture 6 - Chapter 7 / Chapter 8


Central issue
Is qualitative research better than quantitative research or vice versa?

What is qualitative research?


A situated activity that locates the observer in the world. It consists of a set of
interpretive, material practices that make the world visible. These practices
transform the world. They turn the world into a series of representations, including
field notes, interviews, conversations, photographs, recordings, and memos to the
self. At this level, qualitative research involves an interpretive, naturalistic approach
to the world. This means that qualitative researchers study things in their natural
settings, attempting to make sense of, or interpret, phenomena in terms of the
meanings people bring to them.
Typical characteristics:
Naturalistic setting
Researcher is engaged and a key instrument
Multiple sources of data
Participants' meanings
Emergent design
Interpretive inquiry
Inductive analytical approach
Holistic account
Fundamental characteristics:
Are open-ended
Can be concrete and vivid
Are often rich and nuanced
When to use qualitative research? (Graebner et al., 2012)
To build new theory when prior theory is absent, underdeveloped, or
flawed
To capture individuals' lived experiences and interpretations
To understand complex process issues

To illustrate an abstract idea


To examine narratives, discourse, or other linguistic phenomena

Research Design Choices

Again: the purpose of the study determines the approach

Main qualitative data collection methods:


Individual interviews:
o Qualitative interviewing is based on conversation(s), with the emphasis on
researchers asking questions and listening, and respondents answering.
o Purpose: derive interpretations, not facts or laws, from respondent talk
o Why use interviews and not questionnaires?
o Flexibility:
You can ask probing and/or verifying questions, and for
clarifications
You can challenge your respondent or find out why people do not
answer
You can improve your interview script for the next conversation.
You can find out why people do not answer certain questions
o Exploration:
You can explore a topic of which little is known or that is highly
complex.
You can discover that certain previously overlooked themes are
important.
o Learning to know your respondent:
You can build trust, which could lead to better access to sensitive
information.
You will get context information (e.g., how is the respondent
dressed).
You know who is answering the questions.

Degree of structure
Structured - A method of data collection using a questionnaire in
which each person is asked the same set of questions in the same
order by an interviewer who records the responses.

Semi-structured - A method of data collection in which the


interviewer asks about a set of themes using some predetermined
questions, but varies the order in which the themes are covered
and questions asked, and may choose to omit some topics and ask
additional questions as appropriate.
Unstructured / in-depth - A method of data collection in which the
participant talks openly and widely about the topic with as little
direction from the interviewer as possible.

The research process of individual interview inquiry involves 7 steps:


Step I: Thematizing
o Define the topic to be investigated.
What is the problem?
What is the researcher's level of knowledge?
o Pose research questions
o Identify themes to focus on in the interviews (topic list).
Based on your conceptual background.
Connected with the situation of the respondent.
o Determine which level of information to supply to interviewees.
Introduce yourself as a researcher.
Clarify the purpose of the interviews.
Step II: Designing
Construct the interview guide.
Approaches to data recording: notes and tape-recording.
Approaches to questioning.
Appropriate location and appearance
Cultural differences and bias(es)

Make decisions on your sample:
Who to include?
Sample size?

Step III: Interviewing


Shape the interview opening comments
Show appropriate behavior:
Verbal
Non-verbal
Summarize
Structure
Validity
o Demonstrate listening skills
o Situational biases:
Nonparticipants
Different interviewers, each with different trust levels
Physical setting of the interview

Step IV: Transcribing


Literally write down questions and answers
In the language in which the interview was conducted
This process takes time: One hour of interviewing means at least 4
hours of transcribing!
o Principles for developing transcribing rules:
Preserve the morphologic naturalness of transcription.
Preserve the naturalness of the transcript structure.
The transcript should be an exact reproduction.
The transcription rules should be universal and complete.
The transcription rules should be independent
The transcription rules should be intellectually elegant.
Step V & VI: Analyzing and verifying
Systematic and rigorous process
Step 1: Coding (open or closed): grouping into categories
(systematic organization + first step of analysis)
Developing categories
Attaching text segments to these categories
Step 2: Identifying themes and patterns
Step 3: Using theory to make sense and conceptualize
Step 4: Writing up the analysis representing themes & patterns

Computer Aided Text Analysis: Any technique involving the use of
computer software for systematically and objectively identifying
specific characteristics within text in order to draw inferences from
text.
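A toy sketch of such computer-aided coding: closed coding by keyword matching, attaching text segments to predefined categories. The codebook, category names, and interview fragments are all invented:

```python
from collections import Counter

# Hypothetical codebook for closed coding: category -> keywords.
codebook = {
    "workload": ["busy", "deadline", "overtime"],
    "autonomy": ["control", "freedom", "decide"],
}

# Invented fragments standing in for transcribed interview segments.
segments = [
    "I was so busy, every deadline meant overtime.",
    "I can decide my own planning, that freedom helps.",
    "Overtime again this week, it is always busy.",
]

counts = Counter()
for seg in segments:
    text = seg.lower()
    for category, keywords in codebook.items():
        if any(kw in text for kw in keywords):
            counts[category] += 1  # segment attached to this category

print(dict(counts))
```

Real CATA software adds stemming, context rules, and inter-coder checks, but the core step is the same: systematic attachment of text segments to categories.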
Step VII: Reporting
A clear, strong storyline / argumentation / explanation delineating
core concepts and their relationships
Components:
o Empirical description of themes and patterns
o Quotes from interviews
o Theoretical conceptualization

Focus groups: eight to ten members with a moderator leading the discussion for
about two hours on a particular topic, content, or product.
Data collection methods A comparison

Sampling in qualitative research


Most common sampling techniques:
Purposive sampling:
o Judgment sampling
o Quota sampling
Snowball sampling
Focused on theoretical generalization rather than statistical generalization

Evaluating your qualitative research design


Reliability: consistency of findings, the extent to which similar observations
can be made by other researchers.

Making the production of data more transparent: established by auditing of the
research data, goals, methods, and decisions made. It is an exercise of
reflexivity about the process of research.
Internal validity: explication and justification of research inferences.

Subjecting the research material to validity checks (interviewee validation, peer


debriefing, negative case analysis, reflexivity, triangulation, persistent observation,
etcetera)
Interpretive validity: accurate portrayal of participants' meanings.
External validity: degree to which findings can be generalized to other people,
context, places, times, settings.

Ensuring provision of rich and detailed descriptions of context allowing the reader to
decide on transferability of findings.
Comprehensive control over context conditions of the study.
Objectivity: independent of mind, actual; representing facts, not influenced by
personal feelings or opinions.

Consistency of meaning; free from bias; accounting of ourselves.


International issues
The complexity of the research design is greatly increased when working in an
international, multi-cultural and multilinguistic environment (Usunier & Lee, 2013,
p.188)
Try to establish equivalence at the various stages of the research process, to
enable comparisons to be made.
Cross-cultural equivalence - Conceptual equivalence: Do concepts have a
similar meaning across the social units studied?
Cross-cultural equivalence - Functional equivalence: Do concepts have the
same role or function across groups?
Cross-cultural equivalence - Translation equivalence: Do concepts have the
same meaning after translation?
o Lexical
o Idiomatic

o Grammatical-syntactical
o Experiential
Cross-cultural equivalence - Measure equivalence: Are measurement
instruments reliable across cultures?
o Perception varies across cultures (e.g., symbolic interpretation of colors)
Perceptual equivalence
o Scores given by respondents do not always have the same meaning (e.g.,
avoidance of extreme responses)
Metric equivalence
o There might be different basic units used when they are based on
different computation systems (e.g., differences in monetary units,
especially in high-inflation contexts)
Calibration equivalence
o There might be different development levels and technological
advancement
Temporal equivalence.
Cross-cultural equivalence - Sample (unit) equivalence: Are the same
samples selected across countries?
o Sample of countries or culture
o Samples of individuals within the chosen countries or cultures.
Cross-cultural equivalence - Data collection equivalence: Are there any
discrepancies between observed and true measurement?
o Secrecy/unwillingness to answer
Cooperation equivalence
o Response bias
Data-collection context equivalence
o Difference in response style (e.g., yea-saying, non-contingent responding)
Response-style equivalence.

International issues in qualitative research


Example: Data collection equivalence Response bias
Biases resulting from the relationship with the interviewer
o In many traditional countries, housewives are reluctant to grant
interviews to male interviewers.
o In some countries, the interviewees do not understand that the process of
interviewing them is for the purpose of generating objective data.
Guest lecture 1 Archival based research I
o Archival based research: examines objective/original data collected from
existing repositories to extract evidence for a specific research question.
Practically speaking, archival research involves two aspects:
1) Putting together an individual data set that allows you to study a specific
question (this usually also requires encoding/editing data).
2) Analyzing the data to provide empirical evidence for a specific research
question (data handling and analysis).
o Although archival based research makes (extensive) use of data analysis, it
should not be confused with descriptive evidence! Descriptive evidence...
Delivers (only) simple descriptive evidence on an issue, e.g.: How many
firms are using IFRS vs. local GAAP?

What is the opinion of CFOs on IFRS?


is only a snapshot of WHAT IS at a point in time
offers no explanation or rigorous test of WHY IT IS
Theory-based evidence: Archival based research seeks to test hypotheses
based on an economic or behavioral argument using real-world/natural data.
Typical examples from the field of accounting:
o WHY do certain firms voluntary adopt IFRS?
o WHICH TYPES OF FIRMS do voluntarily adopt IFRS?
o What are ECONOMIC CONSEQUENCES of adopting IFRS?
Basis for hypotheses
o Implications from analytical models
o Sound economic / behavioral arguments
o Implications from normative arguments
o Statements from regulators / standard setters
Strong link between research and the real world
Understand how the real world works
Results are often generalizable to other settings
Findings are often interesting to practitioners and regulators
o Economic consequences of structural or regulatory changes (e.g.,
changes in incentive schemes, introduction of IFRS)
o Detection models (e.g., earnings management, fraud, etc.)
o Prediction models (e.g., bankruptcy)
o Investment models (e.g., based on ranking models)
o Use of the same (big) data available in the business world
Example: what is the effect of a particular law/rule?
Two exemplary papers examining such a question:

But: both papers rely on different research methods:


1) Archival study: ex post study on the effect of the rule change
2) Experimental study: ex ante study on the potential effect of the rule
change.
External validity: The extent to which results are generalizable to other tasks,
methods, and groups.
Internal validity: The extent to which we can reliably infer from the results that
two variables are causally related.
Three criteria that a relationship should meet to infer causality:
Covariance: cause and effect are related

Temporal precedence: cause precedes the effect


Non-spuriousness: exclusion of alternative explanations
Modern archival research often makes use of natural
experiments and quasi-experimental research designs.
(Stylized) Process of an Empirical Research Project
STEP 1)
1.a Research Question and Motivation
What has already been done? (Carefully scan literature, matrix approach)
Innovative new research question
Refinement of existing research
1.b Data availability
Is the data you need available?
How easily can you get access to that data?
Note: A contribution can also be to utilize a new data set
Steps 1a) and b) are interactive in an empirical project.
STEP 2
2a) Derive defendable hypotheses
Are the hypotheses based on sound theory?
Does the story make (economic) sense?
Does the project deliver a true contribution?
2b) Collect data
Plan your steps how to collect the data
Your data-set should not be a black-box to you:
Understand the source it comes from
Understand its limitations
Are there any mistakes in the data that could bias your results?
STEP 3: Conduct empirical analyses
Typical analyses in an empirical project involve:
Univariate analyses
Multivariate regression analyses 1 (main effects)
Multivariate regression analyses 2 (expected cross-sectional differences
of main effects)
Extensive set of sensitivity tests
STEP 4: Summarize story and findings
Draft summary tables
Complete writing of the paper
STEP 5: Road Show
Present and circulate your work to get feedback
In-house (Your colleagues)
Conferences (e.g., EAA, AAA, journals)
Doctoral workshops (e.g., EAA, EFMA, others)
STEP 6: Publication
Submit your work to an adequate journal
Expect a number of rounds until publication
Finally, complete page proofs
Of course, steps 5/6 are relevant for any type of academic research
(irrespective of the research methodology used)
Even when the working paper is ready to be sent to a journal, there is still a
long way to go
Select an appropriate journal


Journal rankings, different focus on topics and/or methods,
Submit in accordance with prescribed procedures
Editor selects two researchers (reviewers) who assess the paper
Referee report(s) often contain(s) in depth feedback on the paper
Response typically about 4-6 months after initial submission
Revise and resubmit, Reject, Reject and resubmit
Revise and resubmit: typically 1 year time to rework the paper. Submit
again, Reviewers evaluate reworked version (2nd round). Process
continues until paper is rejected or accepted. Not unusual that papers
take three to five rounds of revision before they are accepted for
publication.
Types of data sources
Primary data: Data that the researcher has developed or collected
specifically for the research project. => Private data (high cost to
collect + outcome uncertain)
Secondary data: Information previously collected by someone other
than the researcher and/or for purposes other than the current research
project.
=> Rapidly available
=> Not expensive (?), available (?)
=> Private or public data (public: risky competition)
Archival data resources
Free access data sources / hand-collected data: This can be a part of
the contribution of a paper.
Access via WRDS (Wharton Research Data Services):
Comprehensive web-based data management system.
Non-free access data sources: Be aware that other researchers typically
also have access to commercial databases and other non-free access
data sources.
Commercial databases for which you need a license (e.g.,
COMPUSTAT, CRSP, BLOOMBERG, etc.)
Datasets with privileged access (e.g. Deutsche Bundesbank for
tax research)
Private/proprietary data
Internal company information from managerial accounting
systems, e.g., from SAP (see e.g., Moers, 2005 AOS)
Databases from investment professionals (see e.g., Covrig
et al., 2007 JAR).
Databases from academic research positions at NYSE,
SEC, ECB, etc. (see e.g., Boehmer et al., 2008 JF)
Published documents
Annual report information (e.g. by scans vs. searchable
pdfs), eXtensible Business Reporting Language (XBRL)
Business magazines and other press articles (see e.g.,
Daske/Gebhardt, 2006 Abacus)
(Established) internet data sources, e.g. Yahoo Finance
(http://finance.yahoo.com)

Specialized internet data bases, e.g.: Loan Trade Database
from Loan Pricing Corporation (see e.g., Wittenberg-Moerman,
2005); Center for Responsive Politics
(Unstructured) web data (via web crawling), e.g.: Stock message
boards (e.g., SeekingAlpha)

Guest lecture Archival based research II


- Online vs. offline:

CUSTOMER LIFETIME VALUE:


Which customers will be the most profitable?
How to attract them? -> Acquisition
How to grow their revenues? -> Cross-selling
How to retain them? -> Retention
CUSTOMER RETENTION: Data can tell you
which customers are most likely to churn.
BOOST ONLINE SALES: Designing the best
online and/or mobile advertising campaigns
(e.g. banner)
CTR = clicks / impressions
SENTIMENT ANALYSIS
Text mining
Online monitoring of eWOM on the
brand
Social media listening rooms
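The CTR formula above as a small helper function; the campaign numbers for the two banner variants are invented:

```python
def click_through_rate(clicks, impressions):
    """CTR = clicks / impressions; guard against zero impressions."""
    if impressions <= 0:
        raise ValueError("impressions must be positive")
    return clicks / impressions

# Invented numbers for two hypothetical banner variants:
ctr_a = click_through_rate(clicks=120, impressions=48_000)
ctr_b = click_through_rate(clicks=95, impressions=31_000)
print(f"banner A: {ctr_a:.2%}, banner B: {ctr_b:.2%}")
```

Here banner B has the higher CTR despite fewer clicks, because it had far fewer impressions.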

Decision problem identification and research question(s)


The manager and the researcher must work together.
These objectives guide the entire process.
Exploratory, descriptive, and causal research, each fulfill different
objectives.
The research plan is a written proposal that includes:
The management problem
The research objectives
The data needed
The research methods required, contact plans, sampling, instruments
to merge the data and store them.
How the results will help management decisions
Budget
Implementation will involve collecting & assembling the data, check for data
quality and performing analysis using a variety of statistical tools.
Research plan:
Gather customer information: Which data sources?
Estimate a churn prediction model Which prediction model?
Budget How much money are we willing to spend per targeted
customer?
Data:
1. Collected from numerous sources:
Sales, customer characteristics, marketing mix, customer
service, social networks, online.
As many as possible!
2. Filtered and assembled into databases:
Merging and matching on customer IDs or based on similar
customer characteristics
Can be complex!
3. Turned into marketing knowledge
Model estimation and translation into managerial insights
Need for analytic and managerial profiles!
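Step 3 (turning data into marketing knowledge) might end in a churn-scoring model. A minimal logistic sketch: the feature names and weights are invented, not a fitted model; a real project would estimate them from the assembled customer data:

```python
import math

# Invented weights for a hypothetical logistic churn model.
WEIGHTS = {"weeks_since_last_purchase": 0.15,
           "complaints": 0.6,
           "has_loyalty_card": -0.8}
INTERCEPT = -2.0

def churn_probability(customer):
    """Logistic score: p = 1 / (1 + exp(-(intercept + w'x)))."""
    z = INTERCEPT + sum(WEIGHTS[k] * customer[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

loyal = {"weeks_since_last_purchase": 1, "complaints": 0,
         "has_loyalty_card": 1}
at_risk = {"weeks_since_last_purchase": 12, "complaints": 2,
           "has_loyalty_card": 0}

p_loyal, p_risk = churn_probability(loyal), churn_probability(at_risk)
print(round(p_loyal, 2), round(p_risk, 2))
```

Customers with high scores would then be targeted by the retention campaign, subject to the per-customer budget.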
Internal data: retrieved from inside the company.

Advantages:
Can be accessed quickly and easily
Less expensive
Disadvantages:
Incomplete information
Timeliness of information
Amount of information
Inappropriate to a particular question or situation
Need for sophisticated equipment and techniques
4 types: Accounting/finance, Sales, HR (employees), Marketing.
Individual customer activity is the most important internal marketing data.
Collected via many touch points:
Demographics
Payments
Scanner data (purchases)
Customer care calls
Complaints
Website visits
Social media
Data trends per industry:
Manufacturing: from predicting new product success using historical
data, to using online activity across many users
Wisdom of the crowd: maximizing stock market performance
Early signals using Google search and Twitter feeds
Retailing: from brick-and-mortar stores
Aggregate sales data
With the emergence of loyalty programs:
Individual-level purchases
Non-purchase data
GPS on shopping carts, mobile scanners and apps. In-store tracing of
the shopping paths.
To online retailers:
Recommendation systems
Collaborative filtering
Cross-selling
Services: customer life-cycle

Online:
Click-through, Browser action, Dwelling time, Explicit judgment,
Reviews, Other page elements.
Lifestyle and entertainment:

Geo-localization
From Sources to Databases to Strategy (SDS Model)

Stored into customer databases, transaction databases, and product


databases.
Product databases: product features, prices, and inventories
attributes
Customer databases: customer characteristics and behavior
Transaction databases:
Data warehouses: store the entire organization's historic data
Designed specifically to support analyses necessary for decision
making
The data in the warehouse are separated into specific subparts, called
data marts, and indexed for easy use.
Internal validity: degree to which the tool measures what it claims to measure.
Think about: actual purchase data; sentiment and text mining.
External validity: degree to which findings can be generalized to other people,
contexts, places, times, settings. How accurate will sales forecasts of other
products be?
CLV Forecasts of new customers be?
Timeliness of the data
Is the firm policy or the market (structurally) different?
Reliability: Consistency of findings, the extent to which similar observations
can be made by other researchers. Think about:
(Persistent) coding mistakes
Measurement error of the measurement tool (GPS tracker)
Objectivity: Subjective scales or actual behavior. Think about:
Actual purchase behavior (vs. purchase intentions)
Online clicking behavior (vs. online survey)
From data to knowledge: Data is the necessary ingredient for a learning
organization
The CTO is responsible for collecting/maintaining the data for the CMO
Marketing insight occurs between information and knowledge
Knowledge is more than information but resides in the employees
Employees create knowledge, computers are learning enablers

Big Data Lecture


- Google flu trends:
Method: use Google search queries to track influenza-like illness in a
population.
Because the relative frequency of certain queries is highly correlated
with the percentage of physician visits in which a patient presents with
influenza-like symptoms, we can accurately estimate the current level
of weekly influenza activity in each region of the United States, with a
reporting lag of about one day.
This approach may make it possible to utilize search queries to detect
influenza epidemics in areas with a large population of web search
users.
- What went wrong?
Quantity of data does not mean that one can ignore foundational issues
of measurement and construct validity and reliability and dependencies
among data.
The core challenge is that most big data that have received popular
attention are not the output of instruments designed to produce valid
and reliable data amenable for scientific analysis.
- Construct Validity: Does the thing measure what it's supposed to measure?
- Criterion-related validity: does the thing correlate with a criterion we would
expect it to correlate with?
- Spurious correlations example: Other search queries in the top 100, not
included in our model, included topics like high school basketball, which tend
to coincide with influenza season in the United States (Table 1). Basically,
it predicts winter.
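The "predicts winter" trap is easy to reproduce: two series that are both driven by the season correlate almost perfectly with no causal link between them. The series below are deterministic functions of a cosine seasonal cycle, purely for illustration:

```python
import math

# Two unrelated series over 104 weeks (two full years), both seasonal:
# flu-like searches and basketball searches peak in winter together.
weeks = range(104)
season = [math.cos(2 * math.pi * w / 52) for w in weeks]
flu = [50 + 40 * s for s in season]
basketball = [30 + 25 * s for s in season]

# Pearson correlation between the two series.
mean_f = sum(flu) / len(flu)
mean_b = sum(basketball) / len(basketball)
cov = sum((f - mean_f) * (b - mean_b) for f, b in zip(flu, basketball))
r = cov / math.sqrt(
    sum((f - mean_f) ** 2 for f in flu) *
    sum((b - mean_b) ** 2 for b in basketball))
print(round(r, 3))
```

The correlation is essentially perfect, yet basketball does not cause flu: both are driven by the confounding seasonal variable, which is exactly why construct validity cannot be replaced by data volume.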
But its not just about size!
Numbers do not speak for themselves! All researchers are interpreters
of data.
Conceptual frameworks are needed to organize knowledge and make
sense of data.
- Seeing patterns where none exist:

Construct Reliability: how comparable and stable are the
measurements? Google's search algorithm is not static.
What's different about big data?
(Size): volume of data

Observational: automatic tracking


Adapted: secondary data, not collected for purpose of research,
challenges in interpretation.
Merged: often datasets are combined.
What's big about big data?
Observations are rows, variables are columns
We often think of big data as being just more observations = more rows
But big data means more columns. This is a big methods challenge
Causality with Big Data:
We are often interested in causal questions. Experiments are the gold
standard in demonstrating causality. But Experiments are often tough /
expensive / unethical to run.
Can we use big data to identify causal effects without having to run an
experiment?
Spotify case:
Transition from owning to streaming entertainment media: not limited
to iTunes vs. Spotify, and not limited to music consumption.
Research question: What effects does adopting a streaming service (e.g.,
an access business model) have on consumer behavior:
1. Quantity (total amount) of music consumed: to what extent do
consumers substitute away from traditional ownership (e.g., iTunes)?
2. Variety of music consumed: how many different varieties? and to what
extent are they superstars vs. long-tail?
3. Discovery of new music: does streaming facilitate discovery of high
value content? How much consumption is new, and how many times
are new discoveries listened to (proxy for value)?
Data: Track listening behavior over time, before and after Spotify adoption,
across a large set of music services.
Results:
1) Consumption growth
After adopting Spotify: users listen to 60% more songs in
total on all platforms. On average, 200 songs/week before
-> 320 songs/week after. Users listen to 34% fewer songs
on iTunes and 27% fewer songs on other platforms (e.g.,
Winamp).
2) Variety
After adopting Spotify: users listen to 31% more unique
artists. On average, 50 artists/week before -> 65
artists/week after. Adoption of Spotify reduces
consumption of superstars, except at the very top. Users
concentrate their listening on their own top artists less.
3) Discovery
After adopting Spotify: Users listen to more new artists
than before. Downward selection: on average, they listen
to each new artist less than before. Upward selection:
Spotify adoption results in top new artists that receive
more plays.
Implications:

Platforms: Even after 6 months, consumption is 60% higher. Migration


away from owned to access
Artists: a double-edged sword. Long-term variety increase: easier to get
into consumers' consumption sets. Staying power declines.
Labels: Blockbuster strategy.
Consumers: welfare increases. Alleviates deadweight loss problem.
Lowers search costs.
Experimental research lecture
- Experiments are used to investigate causal relations:
- Conditions for causality:
X and Y should co-occur (correlation)
X precedes Y in time
There is no other cause (Z) that explains the co-occurrence of X and Y
There should be a logical explanation for the effect of X on Y
- The Experiment: Terminology. Experiment = data collection method where one or more IVs are manipulated to measure the effect on the DV, and where you control for other causes.
IV (Independent Variable X): Variable that is manipulated (also called
the treatment variable) E.g., price, packaging, degree of advertising,
employee bonus.
Ways to manipulate:
Present vs. absent (e.g., bonus vs. no bonus)
Frequency (e.g., high bonus vs. low bonus vs. no bonus)
Type (e.g., punishment vs. reward)
DV (Dependent Variable Y or O): Variable that is measured. E.g.,
sales, click through rate, purchase intention, attitude, motivation,
performance.
Can be nominal, ordinal, interval, or ratio
Extraneous variable: Any variable other than the IV that can influence the DV.
- Imagine: Coca Cola wants to investigate the effects of POP displays on sales.
4 stores are chosen as test units. In 3 of these stores a POP display will
be used (POP1, POP2, or POP3).
In 1 store, no POP display will be used. All are AH stores, with an equal
yearly turnover.
Sales of Coca Cola are measured 5 days before and 5 days after
placement of POP displays.
- Experimental Design: Definition = a set of procedures that specifies:
Respondents (N)
IVs + exact manipulations
DVs + exact measurements
Possible extraneous variables
- Experimental design: Symbolic Notation
X: Exposure of a group of respondents to a treatment
O: Observation (measurement of DV)
R: Random allocation of respondents to treatments
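As an illustration (not on the slides), the classic pretest-posttest control group design reads in this notation:

```
Experimental group:  R   O1   X   O2
Control group:       R   O3        O4

Estimated treatment effect: (O2 - O1) - (O4 - O3)
```

Randomization (R) makes the two groups comparable, so the change in the control group estimates what would have happened without the treatment.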
- Experimental Designs: Validity Issues
Experimental studies have 2 main objectives:
 To draw valid conclusions about the effects of IV(s) on the DV: internal validity
 To make valid generalizations towards a broader group / population: external validity
Without internal validity there is no external validity.
Confound: A variable that threatens the internal validity (Z). To prevent
confounds, extraneous and control variables should be included in the
design.
Threats to the Internal Validity
History: Specific events / factors, outside the experiment, have an
impact on the DV
Maturation: Biological and / or psychological changes over time
Testing: When prior testing affects the DV.
Threats to the Internal Validity (continued)
Instrumentation: the observed effect is due to a change in
measurement
Statistical Regression: extreme scores in the beginning are less
extreme towards the end
Selection Bias: incorrect selection of respondents (experimental and /
or control group)
Mortality: drop out of respondents during experiment.
Increasing Internal Validity: Controlling for Extraneous Variables.
Randomization: random allocation of participants to different conditions
(selection bias, but also instrumentation, history, mortality).
Matching: participants in different conditions are matched on number
of key variables (selection bias, but also mortality)
Design control:
Control group: include group that does NOT receive the
treatment (history and maturation, but also instrumentation and
statistical regression)
Extra groups: e.g., groups without pre-test, but with an
experimental manipulation (to exclude the effects of pre-testing)
(testing and statistical regression)
Statistical Control: measurement of the extraneous variables, and
include these in the statistical analysis (covariance analysis) (history
and selection bias).
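Of these controls, randomization is the simplest to implement. A minimal sketch (the function name and group labels are ours):

```python
import random

def randomize(participants, conditions, seed=None):
    """Randomly allocate participants to conditions, keeping group sizes (near-)equal."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)
    # Deal the shuffled pool out round-robin over the conditions.
    return {c: pool[i::len(conditions)] for i, c in enumerate(conditions)}

groups = randomize(range(12), ["treatment", "control"], seed=1)
print({c: len(p) for c, p in groups.items()})  # {'treatment': 6, 'control': 6}
```

Because allocation is random, pre-existing differences between participants are (in expectation) spread evenly over the conditions.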
Laboratory experiment: Artificial setting to have as much control as possible
over the manipulations. High internal validity.
Field experiments: Natural environment where manipulation is possible.
Problems with randomization. Problems excluding external influences. High
external validity.
Types of experimental designs: pre-experimental designs, true experimental designs, quasi-experimental designs, and statistical designs.
Statistical Designs:
Randomized Block Design: control for 1 specific extraneous variable.
IV: humor (A: a lot; B: little; C: no)
DV: attitude towards store
Extraneous variable: frequency of store visit (a lot vs. medium vs. little)
Factorial Design: test the effects of two or more IVs (and their interaction)
IV1: humor in ad (yes vs. no)
IV2: price promotion in ad (yes vs no)
DV: attitude towards store
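With hypothetical cell means (invented for illustration, on a 1-7 attitude scale), a 2x2 factorial design yields a main effect for each IV plus their interaction:

```python
# Hypothetical mean attitude scores (1-7 scale) per cell of a 2x2 factorial design.
means = {
    ("humor", "promo"): 5.8,
    ("humor", "no_promo"): 5.0,
    ("no_humor", "promo"): 4.6,
    ("no_humor", "no_promo"): 4.4,
}

# Main effect of humor: difference between the averages of the humor rows.
humor_effect = ((means[("humor", "promo")] + means[("humor", "no_promo")]) / 2
                - (means[("no_humor", "promo")] + means[("no_humor", "no_promo")]) / 2)

# Interaction: does the promotion effect depend on the humor level?
interaction = ((means[("humor", "promo")] - means[("humor", "no_promo")])
               - (means[("no_humor", "promo")] - means[("no_humor", "no_promo")]))

print(round(humor_effect, 2))  # 0.9
print(round(interaction, 2))   # 0.6
```

A non-zero interaction means the effect of one IV cannot be interpreted without knowing the level of the other, which is exactly what a factorial design can detect and separate designs cannot.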
Lecture Mono, Multi, and Mixed Research Strategies
Defining mixed methods research (Johnson, 2007, p. 123): A type of research in which a researcher or a team of researchers combines elements of qualitative and quantitative approaches (e.g., use of qualitative and quantitative viewpoints, data collection, analysis, and inference techniques) for the purpose of breadth of understanding or corroboration.
WHY integrate quantitative & qualitative elements?
Confirmation (triangulation)
Complementarity
Critique: confirmation may be impossible with mixed methods research, because qualitative and quantitative methods generate inherently different knowledge.
Benefits Mixed Methods Research (Kroon & Rouzies, 2015):
It is possible to address both exploratory and confirmatory research
questions simultaneously.
It can clarify, complement, or explore alternative explanations for
relationships.
It could provide a methodological fit (nascent versus mature stage).
It provides a multidimensional picture of complex phenomena.
Challenges Mixed Methods Research (Kroon & Rouzies, 2015):
Mixed Methods are not a panacea.
Challenging design, because of the complexity inherent in collecting,
analyzing, mixing, and interpreting qualitative and quantitative data.
Time- and resource-consuming design.
Choices in research design:
Time orientation: Sequential versus Parallel
Priority: Equal versus Dominant
Stage of integration: Single versus Multiple
Assessing the quality of Mixed Methods Research:
Evaluate quantitative and qualitative methods independently on their
own criteria
Evaluate quantitative and qualitative methods independently but on
same overall criteria:
Quality of the design
Rigor of the interpretation
Quality of the design:
Within-design consistency: The researcher should assess the
consistency of the procedures from which the inferences emerged.
Design suitability: The researcher should gauge whether the methods
are appropriate to address the research question.
Design fidelity: The researcher has to evaluate whether the procedure
is implemented with quality and rigor.
Analytic adequacy: The researcher has to judge whether the data
analysis techniques are appropriate to address the research question.
Rigor of the interpretation:
Interpretative agreement: The consistency of interpretation among
different members of a team or the different persons involved in the
interpretation of the data.
Interpretative distinctiveness: The degree to which the inferences are
distinctively different from other possible or rival interpretations.
Theoretical consistency: The researcher should evaluate whether the
inferences are consistent with the state of knowledge in the field.
Integrative efficacy: The researcher should assess whether the meta-inference adequately incorporated the inferences from quantitative and from qualitative data/phases of the study.