Data Analysis Demystified — 5 Key Reporting Methods

by Ramsey Walker · 5 min. read

Striving to interpret survey data but don't have a Ph.D. in math or data science? Discover five easy-to-use data analysis methods: graphical summaries, frequency tables, descriptive statistics, margin of error, and crosstabs. With these methods, you can make informed decisions and gain actionable insights.

Introduction

Have you ever questioned how companies make decisions that truly connect with their intended audience? In order to achieve success, companies and researchers often rely on surveys to collect valuable insights into customer trends, attitudes, and preferences. By analyzing this data, businesses can enhance their product and service development.

Numerous analysis techniques are at your disposal, but we've pinpointed the top five approaches that deliver meaningful insights in minimal time. In this article, we'll focus on analyzing quantitative data sourced from surveys with predominantly closed-ended queries and substantial sample sizes.

#1 Graphical summaries 

When you have received hundreds or even thousands of survey responses, deciding where to begin your analysis can become a daunting task. Thankfully, graphs provide a user-friendly starting point. They visually showcase patterns and trends within the data.

Most survey software will automatically report on aggregated results using pie charts or bar graphs. Researchers often export their data to third-party software, like Microsoft Excel, to expand their analytics capabilities and access a broader range of graphical visualizations. 

Let's dive into six common charts and graphs researchers use to report on their data. We'll use an example of a university-run survey covering student body size, academic performance, and more.

To start, let's explore the data in pie charts, bar charts, and histograms.

Image of pie, bar, and histogram charts summarizing survey data.

Pie charts display how different categories of a single variable compare to the whole. Think of them like a pizza, where each slice represents a different response to a question.

Bar charts are tools for comparison, similar to measuring sticks. They can be laid out horizontally or vertically, and they're used to show and compare the number, frequency, or percentage within one or more categories.

Histograms are a type of bar chart for data that falls within a range such as ages or weights. They display how many responses fall into different groups or "bins." Histograms are a handy way to see how responses are distributed.
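
To make this concrete, here is a minimal matplotlib sketch that draws all three chart types from a small, made-up set of student survey responses. The labels, counts, and ages are hypothetical, and the chart styling is deliberately bare-bones.

```python
# A minimal sketch of a pie chart, bar chart, and histogram built from
# hypothetical student survey data.
import matplotlib.pyplot as plt

# Hypothetical aggregated answers to "How satisfied are you with campus dining?"
labels = ["Very satisfied", "Satisfied", "Neutral", "Dissatisfied"]
counts = [120, 240, 90, 50]

# Hypothetical individual ages for the histogram
ages = [18, 19, 19, 20, 21, 21, 22, 23, 24, 25, 26, 28, 30, 34]

fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(12, 4))

ax1.pie(counts, labels=labels, autopct="%1.0f%%")   # share of the whole
ax1.set_title("Pie chart")

ax2.bar(labels, counts)                             # compare category counts
ax2.set_title("Bar chart")
ax2.tick_params(axis="x", labelrotation=45)

ax3.hist(ages, bins=[18, 21, 24, 27, 30, 33, 36])   # distribution across age bins
ax3.set_title("Histogram")

plt.tight_layout()
plt.show()
```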

Now let's move on to scatter plots, line charts, and stacked bar charts.

Image of scatter plots, line charts, and stacked bar charts summarizing survey data.

Scatter plots are essentially a field of dots used to examine the relationship between two variables that can take on many different values. They are beneficial in finding trends, detecting patterns, and identifying any unusual data points or outliers in your data set.

Line graphs typically track data over time. They're also effective in highlighting differences in changes across various groups.

Stacked bars visually represent data in several categories within a single bar. Each bar can vary in length when representing counts or display a fixed length when representing percentages within a group. Stacked bars allow for comparing category totals as well as the category distribution within each bar.
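
Here is a similar sketch for this second set of charts, again using hypothetical numbers for hours studied, satisfaction over time, and yes/no answers by class year.

```python
# A sketch of a scatter plot, line graph, and stacked bar chart using
# hypothetical survey data.
import matplotlib.pyplot as plt

fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(12, 4))

# Scatter plot: hours studied vs. exam score for individual respondents
hours = [2, 4, 5, 6, 7, 8, 10, 12]
scores = [55, 62, 70, 68, 75, 80, 88, 92]
ax1.scatter(hours, scores)
ax1.set_title("Scatter plot")
ax1.set_xlabel("Hours studied")
ax1.set_ylabel("Exam score")

# Line graph: average satisfaction tracked over four survey waves
waves = ["Q1", "Q2", "Q3", "Q4"]
ax2.plot(waves, [3.4, 3.6, 3.9, 4.1], marker="o", label="Undergraduates")
ax2.plot(waves, [3.8, 3.7, 3.9, 4.0], marker="o", label="Graduates")
ax2.set_title("Line graph")
ax2.legend()

# Stacked bar: "Yes"/"No" breakdown within each class year
years = ["Freshman", "Sophomore", "Junior", "Senior"]
yes = [40, 55, 60, 70]
no = [60, 45, 40, 30]
ax3.bar(years, yes, label="Yes")
ax3.bar(years, no, bottom=yes, label="No")  # stack "No" on top of "Yes"
ax3.set_title("Stacked bar chart")
ax3.legend()

plt.tight_layout()
plt.show()
```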

When presenting survey data, it's wise to use a mix of charting types to convey your message effectively. These visual aids simplify the interpretation of complex data by highlighting patterns and trends that may not be evident through text alone. Proper use of these tools can elevate presentations and enhance the communication of survey findings.

#2 Frequency tables

If you've analyzed survey data before, you've probably already encountered the term "frequency table." But what exactly does it mean? In simple terms, a frequency table is a chart that tells you how often something occurs in your data. By organizing your data into categories and counting how many times each category appears, you can quickly understand the distribution of your data.

Image of a frequency table summarizing the responses to a survey question.
Frequency table

In the Centiment Survey Tool, frequency tables are provided directly below the charts for each applicable question. These tables offer another way to quickly interpret the response counts for each question choice in a tabular format.

On the surface, frequency tables can seem like a basic output of a survey report without many key insights. But, when you start to apply data filtering, frequency tables can be a simple yet powerful tool to draw conclusions. If your survey software does not include frequency tables, they can quickly be created in Excel or Google Sheets.
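
If you do head to a spreadsheet or a script, a frequency table takes only a few lines. The sketch below assumes your responses have been exported to a CSV with a column named favorite_color; both the file name and the column name are hypothetical placeholders for your own export.

```python
# A quick way to build a frequency table outside your survey tool.
import pandas as pd

# Hypothetical export of survey responses
df = pd.read_csv("survey_responses.csv")

freq = df["favorite_color"].value_counts()                      # counts per category
pct = df["favorite_color"].value_counts(normalize=True) * 100   # percentages

frequency_table = pd.DataFrame({"Count": freq, "Percent": pct.round(1)})
print(frequency_table)
```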

Filtering

Survey filtering generally refers to the process of sorting and narrowing down survey responses based on specific criteria. This technique is used to segment the data and analyze specific subsets of responses. It's an essential step in data analysis as it can help researchers focus on the most relevant data and quickly identify trends or patterns.

Let's review some of the most common filters.

  1. Demographic filters include age, gender, location, education level, and other profile data.
  2. Response filters incorporate responses to specific questions. For instance, you might only want to see responses from people who answered "Yes" to a particular question.
  3. Completeness filters can eliminate respondents who didn't fully complete the survey or were disqualified.
  4. Date filters are helpful if you're interested in how responses have changed over time.
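
If you prefer to filter outside your survey tool, here is a minimal pandas sketch of the four filter types above. The column names (age, q5_exercise_daily, status, submitted_at) and the file name are hypothetical stand-ins for whatever your export contains.

```python
# A sketch of demographic, response, completeness, and date filters in pandas.
import pandas as pd

df = pd.read_csv("survey_responses.csv", parse_dates=["submitted_at"])

# 1. Demographic filter: respondents aged 18-34
young_adults = df[df["age"].between(18, 34)]

# 2. Response filter: only people who answered "Yes" to a specific question
said_yes = df[df["q5_exercise_daily"] == "Yes"]

# 3. Completeness filter: drop partial or disqualified respondents
complete = df[df["status"] == "Complete"]

# 4. Date filter: responses collected in 2024 or later
recent = df[df["submitted_at"] >= "2024-01-01"]

# Filters can be combined to cut the data by multiple variables at once
segment = df[(df["age"].between(18, 34)) & (df["q5_exercise_daily"] == "Yes")]
print(len(segment), "respondents match both criteria")
```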

Many survey software providers, including the Centiment Survey Tool, offer advanced response filtering capabilities. Using built-in filters enables you to efficiently cut your data set by multiple variables, saving you time and effort.

Once you have established your desired filters, you can easily export your data to CSV or SPSS. From there, you can construct intricate graphs using Excel, Google Sheets, or more advanced software like Stata, SPSS Statistics, SAS, or R.

#3 Descriptive statistics

Descriptive statistics are concise mathematical summaries that provide insights into your data set. They offer information on the primary characteristics of the data, including measures of center and spread. Here are some examples:

  • The average height of a group of people is 5'8".
  • The median income of a group of people is $50,000.
  • The most popular color in a survey is blue.
  • The range of scores on a test is from 90 to 100.
  • The standard deviation of scores on a test is 10.

As you can imagine, these are the kind of stats you would layer into a presentation. Now let's dive into the most common statistics used to summarize survey data.

Measures of center

Mean is the average of a data set, indicating a central point in the data.

Median is the middle value when all the responses are arranged in ascending or descending order. If there are an odd number of responses, the median is the value exactly in the middle. If there are an even number of responses, the median is the average of the two middle values.

For example, if you survey people asking how many books they read in a year and get these responses: 1, 5, 9, 12, 17, the median would be 9. The median is particularly useful when data are skewed or outliers are present.

Mode refers to the response or data point that appears most frequently in the dataset. In other words, it's the most common answer given by the participants.
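
Here is a quick sketch of all three measures of center using Python's built-in statistics module, reusing the book-reading example above plus a made-up set of color responses to illustrate the mode.

```python
# Mean, median, and mode with the standard library.
import statistics

books_read = [1, 5, 9, 12, 17]

print(statistics.mean(books_read))    # 8.8  (average)
print(statistics.median(books_read))  # 9    (middle value)

# Mode needs repeated values to be meaningful, so here is a second,
# hypothetical set of responses.
favorite_color = ["blue", "red", "blue", "green", "blue"]
print(statistics.mode(favorite_color))  # "blue" (most frequent answer)
```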

Measures of spread

The most common measures of spread that can help you to understand the dispersion of your results are standard deviation and range.

Standard deviation gauges the dispersion of data points from the mean and is the most commonly referenced measure of spread to describe a data set. A low standard deviation means values tend to be closer to the mean, while a high standard deviation means values tend to be further from the mean. Imagine it as a typical "distance" each data point varies from the average. It's most insightful when your data displays a bell curve shape, or what many researchers call a “normal distribution”.

For instance, if you measured the heights of students in a classroom, a high standard deviation would indicate that the heights vary widely, while a low standard deviation would suggest that most students are close in height. 

Calculating the standard deviation is a more advanced exercise, but the good news is many widely available data tools (like Microsoft Excel or Google Sheets) can calculate the standard deviation for you. To understand how to calculate standard deviation on your own, reference this article.
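
If you would like a sense of what the calculation involves, here is a small sketch that computes the standard deviation by hand and checks it against the statistics module. The test scores are hypothetical.

```python
# Standard deviation by hand versus the statistics module.
# statistics.pstdev treats the data as the whole population;
# use statistics.stdev for a sample.
import math
import statistics

scores = [70, 75, 80, 85, 90]

mean = sum(scores) / len(scores)                   # 80.0
squared_diffs = [(x - mean) ** 2 for x in scores]  # squared distance from the mean
population_sd = math.sqrt(sum(squared_diffs) / len(scores))

print(population_sd)              # ~7.07
print(statistics.pstdev(scores))  # same result
```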

Range is pretty straightforward. It's the difference between the largest and smallest value.

Percentiles and quartiles are methods to divide data. For example, if your test score is in the 90th percentile, you scored higher than 90% of participants. Quartiles split data into four equal parts; if your score is in the fourth quartile, you did better than 75% of participants.
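
Here is a short sketch of range, quartiles, and percentiles for a hypothetical set of scores, using statistics.quantiles from the Python standard library.

```python
# Range, quartiles, and percentiles for hypothetical scores.
import statistics

scores = [70, 72, 75, 78, 80, 83, 85, 88, 90, 95]

print(max(scores) - min(scores))               # range: 25

quartiles = statistics.quantiles(scores, n=4)  # cut points at 25%, 50%, 75%
print(quartiles)

percentile_90 = statistics.quantiles(scores, n=10)[8]  # 90th percentile cut point
print(percentile_90)
```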

Understanding measures of spread is valuable when interpreting data, as they can provide a deeper understanding of the patterns present.

#4 Margin of error (MOE) and statistical significance

MOE and statistical significance are used to help understand the results of a study. They are closely related, which is why we are addressing them together.

  • Margin of error is a measure of how precise a survey result is.
  • Statistical significance is a measure of how unlikely it is that a result occurred by chance.

Let's unpack these statements a bit more.

MOE quantifies the maximum amount by which the sample results are expected to differ from the true population value. Simply put, it tells you how confident you can be in the survey's findings.

When conducting quantitative research, a 5% margin of error is commonly accepted. To illustrate, if a survey reports that 30% of people exercise daily, the actual percentage could be anywhere from 25% to 35%. However, if you'd like to have your research published in a journal or picked up by the press, setting a smaller margin of error, ideally around 2-3%, is recommended.

Margin of error is important to take into account at the outset of your study planning as it informs how large your sample size should be. Calculating MOE depends on three variables: the sample size, the population size, and the level of confidence desired. The larger the sample size, the smaller the margin of error will be. We won't get into the math in this article. Fortunately, Centiment offers a margin of error and sample calculator that can quickly crunch the numbers for you.
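
For a rough sense of the math the calculator handles for you, here is a minimal sketch of the standard margin-of-error formula, assuming a 95% confidence level (z = 1.96) and the worst-case proportion of 0.5. It is a simplification, not a substitute for the calculator.

```python
# A minimal margin-of-error sketch. The finite population correction only
# matters when the sample is a large share of the population.
import math

def margin_of_error(sample_size, population_size=None, z=1.96, p=0.5):
    moe = z * math.sqrt(p * (1 - p) / sample_size)
    if population_size:  # finite population correction
        moe *= math.sqrt((population_size - sample_size) / (population_size - 1))
    return moe

# A sample of 385 from a very large population gives roughly a 5% MOE
print(round(margin_of_error(385) * 100, 1))   # ~5.0
# Shrinking the MOE to ~3% requires a much larger sample
print(round(margin_of_error(1068) * 100, 1))  # ~3.0
```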

Moving on from MOE, statistical significance determines whether the differences observed between the groups being studied are genuine or merely coincidental. If a result is considered statistically significant, it is unlikely to have occurred by chance alone. For more on this topic, reference A Refresher on Statistical Significance.

You generally want a small margin of error and statistically significant results. This means you can be confident that your findings are precise and not due to chance.
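
To make the idea concrete, here is a sketch of one common significance test: a chi-square test of independence run with SciPy on hypothetical counts of daily exercisers by gender. Which test is appropriate depends on your data and your question, so treat this as an illustration rather than a recipe.

```python
# A chi-square test of independence on a hypothetical 2x2 table of counts.
from scipy.stats import chi2_contingency

#              Yes   No
observed = [[ 90,  210],   # female respondents
            [ 60,  240]]   # male respondents

chi2, p_value, dof, expected = chi2_contingency(observed)

print(round(p_value, 4))
if p_value < 0.05:
    print("The difference between groups is statistically significant.")
else:
    print("The difference could plausibly be due to chance.")
```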

#5 Crosstabs

Have you ever sifted through a mound of data, wondering if there's any correlation between your variables? Crosstabs are a tool used in statistical analysis that allows you to examine relationships between two or more variables. They help you identify patterns and trends in your data, giving you a better understanding of any potential correlations that exist.

We can create filters all day long, but often key insights can be found in the relationship between multiple variables. You might want to know how your survey results differ by generation or region of the US. This is where crosstabs come in. At a basic level, they are more detailed frequency tables broken out by multiple variables to help us identify trends and compare outputs.

Imagine a survey of 32 questions, 2 of which are demographic questions on age and gender. If you wanted to break out each demographic across the other 30 questions, you would generate 30 x 2 = 60 crosstabs, plus one more comparing the two demographics against each other. To illustrate the structure, below is that single crosstab comparing age against gender.

Image of a cross-tabulation reporting on the data of two survey questions.
Crosstab

Crosstabs can seem like a lot of data to soak in at first glance, but once you get oriented to the layout, they become a powerful tool for survey analysis.

Their outputs can be customized. The most common breakout includes the cross-section count as well as the row (stub) and column (banner or tab) percentages.
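
If you export your data, pandas can produce the same breakouts. The sketch below assumes hypothetical age_group and gender columns and shows the cross-section counts plus the row and column percentages described above.

```python
# Crosstabs in pandas: counts, row (stub) percentages, and column (banner) percentages.
import pandas as pd

df = pd.read_csv("survey_responses.csv")

# Cross-section counts with row and column totals
counts = pd.crosstab(df["age_group"], df["gender"], margins=True)

# Row (stub) percentages: each row sums to 100%
row_pct = pd.crosstab(df["age_group"], df["gender"], normalize="index") * 100

# Column (banner) percentages: each column sums to 100%
col_pct = pd.crosstab(df["age_group"], df["gender"], normalize="columns") * 100

print(counts)
print(row_pct.round(1))
print(col_pct.round(1))
```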

Crosstabs give you a bird's eye view of how two or more questions intersect, allowing you to fill in the blanks and make sense of it all. So, put on your detective hat and give crosstabs a try; you might uncover some hidden insights that can help drive your business forward.

Conclusion

Learning to analyze survey data using various methods is crucial to interpret a dataset correctly. Our discussion included graphical summaries, frequency tables, descriptive statistics, margin of error, and crosstabs. These are merely the basics, however, and there are countless other methods to improve your analytical skills. These foundational elements serve as a starting point to build upon as you broaden your knowledge and become more proficient in your analysis capabilities.

Now that you have a grasp of the fundamental tools for data analysis, it's time to collect your sample and embark on your survey analysis journey. Whether the data originates from current customers, employees, or a panel service, interpreting your data correctly is essential for success.

Need respondents? Reach 100+ countries.
Ready to take action? Start building your survey now.
Want one of our Research Pros to drive your project to success?
