National Survey Of Student Engagement Education Essay


The National Survey of Student Engagement (NSSE) is a survey tool that is gaining popularity in the higher education sector to assess student engagement. The survey tool provides baccalaureate institutions with information from first-year and senior students regarding their experiences that can be used to positively impact retention.

A review of the NSSE survey tool provided the questions and areas of focus that are available to institutions. The website hosted by the Indiana University Center for Postsecondary Research offers a wide range of resources, from a broad overview to specific methodological information on the psychometric properties of the survey. Many institutions rely on this survey to guide their strategic planning process, but is it a sound survey? A review of the Mental Measurements Yearbook provides insight into the psychometric properties of the test.

The critique provides insights into how the results can be used to guide strategic planning, validate compliance with accreditation standards, and improve retention. Strengths and limitations are addressed, along with a critique of a few sample items. Overall, the survey results are meant to help institutions improve.

The NSSE survey is gaining popularity, and for good reason. The staff members at the Indiana University Center for Postsecondary Research provide an abundance of resources to support institutions as they begin to review data, make sense of results, and decide how to use the results for improvement. They have even started to offer specialized tests for different sectors within the higher education market.

Test Critique

National Survey of Student Engagement (NSSE)

Higher education has been facing greater calls for accountability from external sources. Accreditation bodies have reflected this increased accountability through demands for assessment data. Central Penn College and other accredited colleges within Pennsylvania comply with standards published by the Middle States Commission on Higher Education (MSCHE). These standards, referred to as The Characteristics of Excellence, are used by the Commission to evaluate institutions. In response to increasing accountability pressure from the federal government, Middle States requires accredited institutions not only to conduct assessment, but to use the data to inform change.

The NSSE survey tool provides baccalaureate institutions with information from first-year and senior students about their experiences, and that information can be used to positively impact retention. The tool's popularity rests in the ability to use the data to inform change. Several colleges have adopted and implemented the National Survey of Student Engagement to gather college-wide assessment data that illustrates compliance and improves the learning experience for students. Improving the learning experience also affects another important facet of higher education: retention.

While the calls for accountability are rising, funding and support from federal, state, and private sources are dwindling. Amid these funding and accountability pressures, institutions also compete to be the school that students choose, reaching out for the best students by offering flexibility, greater access, transferability of credit, and scholarships. The fact remains that institutions must prove they are comparable to other institutions in terms of student engagement. The National Survey of Student Engagement (NSSE) has become the preferred survey tool within the higher education community for demonstrating this information to students, parents, and external stakeholders. The survey instrument, formally titled the College Student Report but more frequently referred to as NSSE, consists of approximately 95 items. A review by Sauser in the Mental Measurements Yearbook notes that the items are "short, behaviorally based items for college students to rate on scales ranging from 2 to 7 points using a simple 'mark the box' format" (2012).

History and Purpose

The National Survey of Student Engagement (NSSE) was launched in 2000 by the Indiana University Center for Postsecondary Research at Indiana University Bloomington (IUB). The survey was initially supported through a grant provided by The Pew Charitable Trusts and has been supported through participating institutional fees since 2002.

According to literature from the Indiana University Center for Postsecondary Research, the National Survey of Student Engagement was created to accomplish two core objectives:

Refocus the national discourse about college quality on teaching and learning

Provide institutions with diagnostic, actionable information that can inform efforts to improve the quality of undergraduate education (http://nsse.iub.edu).

The NSSE survey is administered to first-year and senior students to gather input about the educational experiences that facilitate student development. The data gathered through the survey can be used by institutions in a variety of ways.

Data reports provided to institutions allow for comparison within the institution between the first-year and senior experiences, comparison to a set of identified peer institutions, and comparison to the entire population of participating institutions. This enables colleges to establish internal and external benchmarks by which to assess effectiveness.

I work for Central Penn College in Summerdale, PA, where student retention is of utmost importance and the focus of many collegial discussions. Research has indicated that students who are engaged with their college experience are more likely to be retained through to graduation. Institutions spend a good deal of money to attract students, and it is more effective and efficient to retain the students they have already attracted. In 2012, Central Penn administered its first NSSE survey. The results provided Central Penn with information that can be compared to other institutions, but more importantly the data can be used for assessment purposes to illustrate compliance with the fourteen standards published by the Middle States Commission on Higher Education (MSCHE).

The NSSE Accreditation Toolkit maps test items to specific Middle States standards in order to assist institutions. "Institutions may find that NSSE results can apply to the following accreditation standards: Standard 2: Planning, Resource Allocation, and Institutional Renewal; Standard 6: Integrity; Standard 7: Institutional Assessment; Standard 11: Educational Offerings; Standard 12: General Education; and Standard 13: Related Educational Activities" (NSSE Accreditation Toolkit, p. 4). The toolkit maps each question to either one standard or a group of standards for use by institutions. Maintaining accreditation is critical to institutions because it is linked to federal financial aid, so the use of data to assess and validate compliance is beneficial. NSSE also provides institutions with information on how to use the results not only to meet accreditation requirements, but to meet the overall goal of helping institutions improve. NSSE is the tool used for institutions offering bachelor's degrees, but similar survey tools exist for other sectors.

Standardization Sample

Dr. Kuh and associates started the project with a review of 25 years of literature on educational practices that impact student learning. NSSE began with a pilot-testing program in 2000 and relies on self-reporting by participants. The Guide to Your NSSE Institutional Report 2012 provides the following information about samples at participating institutions.

"All NSSE comparison reports are based on information from census-administered or randomly selected students for both your institution and comparison institutions. Targeted oversamples and other non-randomly selected students are not included in the reports." (p. 3)

The data is assessed by institution and then across the span of participating institutions. An annual report aggregates and summarizes the data for comparison purposes. Institutions that administer the survey over several years can also receive a multi-year benchmark report of data quality indicators. This report provides multi-year data comparisons based on the number of respondents (n), standard deviation (SD), standard error of the mean (SEM), and the upper/lower 95% confidence interval limits. The 2002 annual report relied on a sample of 285,000 students at 618 different four-year colleges to provide summarized data about student engagement. To illustrate the growing popularity of this survey tool, it is noteworthy that the 2011 administration represented 751 institutions in the United States and Canada and 537,605 student respondents.
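To make those data quality indicators concrete, the following is a minimal sketch of the arithmetic behind them. The scores here are invented for illustration; in practice they would come from an institution's own respondent data.

```python
import math

# Hypothetical benchmark scores for one institution's respondents;
# real values would come from the institution's NSSE data file.
scores = [52.1, 48.7, 61.3, 55.0, 49.8, 58.2, 53.4, 50.6]

n = len(scores)                                   # number of respondents (n)
mean = sum(scores) / n                            # mean benchmark score
sd = math.sqrt(sum((x - mean) ** 2 for x in scores) / (n - 1))  # sample SD
sem = sd / math.sqrt(n)                           # standard error of the mean

# Approximate 95% confidence interval; 1.96 assumes a normal distribution,
# so with very small n a t critical value would be more appropriate.
ci_lower, ci_upper = mean - 1.96 * sem, mean + 1.96 * sem

print(f"n={n}, mean={mean:.2f}, SD={sd:.2f}, SEM={sem:.2f}, "
      f"95% CI=({ci_lower:.2f}, {ci_upper:.2f})")
```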

Psychometric Properties

The Indiana University Center for Postsecondary Research publishes a Psychometric Portfolio for NSSE and routinely assesses the quality of the survey and results. The Psychometric Portfolio provides essential information about validity, reliability, and other quality indicators and reaffirms NSSE's commitment to continuous improvement for the higher education community. All information is published and available online at http://nsse.iub.edu.

Validity. NSSE researchers use seven different tests to measure overall validity and determine whether the survey is measuring what it is intended to measure. Researchers analyze response process validity, content validity, construct validity, concurrent validity, predictive validity, known groups validity, and consequential validity to determine the extent to which the interpretations of test scores match the intended use of the test. NSSE provides the following questions to guide researchers in analyzing each area.

Response Process Validity - "Do respondents understand the questions to mean what we intend them to measure?"

Content Validity - "Do the survey questions cover all possible facets of the scale or construct?"

Construct Validity - "How well does this group of items measure the theoretical concept?"

Concurrent Validity - "Do the questions measure the construct in the same way that others have measured it?"

Predictive Validity - "Does the scale correlate in predicted ways with outcome measures or other forms of engagement?"

Known Groups Validity - "Do the results of various subgroups match those from other studies?"

Consequential Validity - "Are the survey results interpreted and used in ways that improve undergraduate learning?"

Eugene Sheehan evaluated validity in his review for the Mental Measurements Yearbook. He notes that "the technical and norms report contains a description of how items were selected to accurately assess the appropriate construct. Arguments regarding plausible and expected relationships between scales or between subgroups are also made. For example, patterns of responses between first-year students and seniors suggest the same validity" (p. 6).

Reliability. Reliability measures are used to determine whether the data and results can be reproduced and whether the measurement is stable. NSSE researchers utilize three measures to determine reliability. They use internal consistency to determine how well items correlate with each other. Temporal stability measures the consistency of scores over time and indicates to what extent data and results can be reproduced; it helps institutions compare data across repeated administrations, which enables colleges to conduct longitudinal assessments. The last measure is equivalence, which monitors correlations between items of similar constructs when they are applied to the same population. NSSE researchers have also identified questions to focus their work in the three areas of reliability.

Internal Consistency - "Do the items within a scale correlate well with each other?"

Temporal Stability - "How stable are the results for institutions and students upon repeated administration?"

Equivalence - "Do results correlate well with those of a similar measure on the same population?"

Sauser's analysis of reliability in The Mental Measurements Yearbook indicates that "standardized item alpha reliability scores were .82 for the 20 college activities items, .88 for the 14 educational and personal growth items, and .83 for the 10 items measuring 'opinions about your school'" (pp. 16-17). Correlations of concordance of institutional benchmark scores from 2000 and 2001 ranged from .83 to .92 for freshmen and .76 to .89 for seniors. Matched-sample t-tests (from the 2000 and 2001 administrations) resulted in all coefficients being statistically significant, ranging from .60 to .96. "A test-retest reliability study yielded a coefficient of .83 (N = 569) across all items" (pp. 3-4).
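For readers unfamiliar with these coefficients, the sketch below shows how Cronbach's alpha, which is closely related to the standardized item alpha figures quoted above, is computed for a scale. The response matrix is hypothetical; NSSE's own analyses are run on its national dataset.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items response matrix."""
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 students rating 4 Likert-type items (1-4).
responses = np.array([
    [3, 3, 4, 3],
    [2, 2, 2, 1],
    [4, 4, 3, 4],
    [1, 2, 1, 2],
    [3, 4, 4, 3],
    [2, 3, 2, 2],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")

# Test-retest reliability, by contrast, is simply the Pearson correlation
# between total scores from two administrations of the same survey:
# r = np.corrcoef(totals_time1, totals_time2)[0, 1]
```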

Other Quality Indicators. NSSE researchers also utilize other quality indicators to strengthen and validate the previously mentioned psychometric properties. NSSE indicates that these other quality indicators are used to "reduce error and bias, and to increase the precision and rigor of the data" (http://nsse.iub.edu). Researchers study self-selection bias, item bias, measurement error, data quality, mode analysis, non-response effects/bias, sampling error, and social desirability. The psychometric portfolio links to assessments in the various areas per year, but each indicator is not assessed annually. The researchers utilize the following questions to gain additional information about the survey design, administration, and outcomes.

Self-Selection Bias - "Are institutions that participate in NSSE different from other baccalaureate granting colleges and universities?"

Item Bias - "Are there NSSE items that are unintentionally favoring a particular group of students?"

Measurement Error - "Does NSSE take necessary steps in their policies, processes, and reporting to reduce the amount of error in the data?"

Data Quality - "Do students provide enough answers to the NSSE survey? How many questions do they omit?"

Mode Analysis - "Do students who reply using the paper mode of NSSE respond differently than students using the Web mode?"

Non-response Effects/Bias - "Do students that respond to the NSSE differ from those that choose not to respond to NSSE?"

Sampling Error - "Do institutions participating in NSSE have enough respondents to adequately represent their population?"

Social Desirability - "Are NSSE scores influenced by a desire to respond in a socially desirable manner?"

Reviewing the questions posed by researchers to assess validity, reliability, and other quality indicators gives me, as an administrator, greater trust in the survey instrument. The Mental Measurements Yearbook indicates that NSSE is a valid and reliable survey tool based on its psychometric properties.

Critique of a Few Sample Items

Central Penn College prides itself on its caring faculty and the personalized attention that students receive from their faculty. NSSE has a grouping called Student-Faculty Interaction (SFI). Assumptions may lead one to think that Central Penn would naturally do very well in this area. Central Penn's first-year scores were above the Pennsylvania Peer Group and the NSSE 2012 group, but lower than the Carnegie class. When viewing the senior student results, Central Penn scored below all the groups. This was perplexing until an analysis of the underlying questions was conducted. Central Penn does not require faculty research because its faculty members are dedicated to teaching. Therefore, questions such as 1.s. "worked with faculty members on activities other than coursework (committees, orientation, student life activities, etc.)" and 7.d. "work on a research project with a faculty member outside of course or program requirements" are very limiting given the unique characteristics of Central Penn. For online-only institutions, questions such as 10.f. "attending campus events and activities (special speakers, cultural performances, athletic events, etc.)" can be equally limiting.

To use the outcomes appropriately, institutions must identify how they will use the data to inform improvements based on the value added from the freshman to the senior year, while also identifying what data will be discounted in the overall evaluation. Some data points will not be applicable to some schools for external benchmarking purposes and should be discarded. However, the data can still be used to indicate growth within the institution from the freshman to the senior level. In contrast to the two items above, there are sample items that are less controversial when applied across different institutions. Examples such as the following can be administered across institutions with relative accuracy because they are inherent in the educational process.

Question 2 - During the current school year, how much has your coursework emphasized the following mental activities?

Question 4 - In a typical week, how many homework problem sets do you complete?

Institutions must take time to evaluate their survey results to identify how they will be most meaningful and useful.

Usefulness for Participating Institutions

Since its implementation, NSSE has developed specific tools useful to an institution. These tools are highlighted on the NSSE website (http://nsse.iub.edu) and include:

A customized institutional report that provides responses by class and provides statistical comparison information for three comparison groups.

Engagement benchmarks that compare key NSSE indicators at your college with those at your comparison institutions.

Specialized summary reports that can be used to share information with internal or external audiences.

Student Data File that provides student identifiers and survey responses so data can be used for internal analysis (a sketch of such an analysis follows this list).

Annual Results that share findings from across the nation about student engagement initiatives and best practices.
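As a hypothetical illustration of that internal analysis, the sketch below loads a Student Data File and compares first-year and senior responses. The file name and column names are invented; the actual layout is defined in NSSE's codebook for each survey year.

```python
import pandas as pd

# Hypothetical file and column names; the real Student Data File layout
# is documented in NSSE's codebook and varies by survey year.
df = pd.read_csv("nsse_student_data.csv")

# Compare first-year and senior students on an invented engagement item,
# mirroring the internal first-year vs. senior comparison NSSE reports.
summary = df.groupby("class_level")["faculty_interaction_item"].agg(
    ["count", "mean", "std"]
)
print(summary)
```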

Some institutions use the data to improve retention and persistence. The data enables them to identify key areas from the Supportive Campus Environment (SCE) grouping that could guide needed programming. Some colleges have used the survey results to improve writing across the curriculum or to encourage student-faculty interaction. Data has also been used to improve specific support programs such as advising or a first-year experience requirement. Central Penn has chosen, as have some other colleges, to use the data to gather direct and indirect assessment evidence that informs student learning outcomes and institutional effectiveness.

Regardless of how the data is used, NSSE provides supports such as a report builder, accreditation toolkits, resources for sharing the data with internal and external audiences, and educational events that support participating institutions and encourage cross-institutional dialogue about the many ways NSSE can be used. The researchers behind the survey are clearly invested in not only delivering a survey, but in using results to help institutions improve.

Limitations of NSSE

The survey is limited by its focus on first-year and senior student feedback regarding engagement at bachelor-degree granting colleges. To address these limitations, Indiana University Bloomington (IUB) has created other instruments: the Beginning College Survey of Student Engagement (BCSSE), which focuses on the experiences and expectations that high school students bring to the first year of college, and the Faculty Survey of Student Engagement (FSSE), which focuses on faculty perceptions of student engagement. Central Penn also participated in a Linking Institutional Policies to Student Services (LIPSS) survey that will help identify areas of strength and weakness. If we choose not to use these results for improvement, the institution itself is placing a limitation on the use of the results.

Other limitations include each institution's response rate and sample size, the number of times a college has administered the survey, and how the survey results are being used. Small sample sizes do not allow colleges to make inferences supported by widespread data. If a college is administering the survey for the first time, it has no established benchmarks and will need to repeat the survey to gain additional data that is meaningful in guiding decisions. Lastly, a college needs to use the data to inform change; otherwise, it is wasting time and money gathering data only for the sake of collecting data. This is the very issue that accrediting bodies, such as Middle States, are trying to combat in their assessment of institutions' use of data to inform change.
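To illustrate why small samples are limiting, the following sketch computes an approximate 95% margin of error for a survey proportion at several respondent counts. The counts are arbitrary, and the worst-case assumption p = 0.5 is a standard simplification.

```python
import math

# Approximate 95% margin of error for an estimated proportion,
# using the worst-case assumption p = 0.5 (maximum variance).
for n in (50, 200, 800):
    moe = 1.96 * math.sqrt(0.5 * 0.5 / n)
    print(f"n = {n:>3}: margin of error = +/-{moe:.1%}")
```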

Summary

The National Survey of Student Engagement (NSSE) is a survey instrument that has been adopted by a variety of institutions across the nation. In his review, Sauser notes that "the NSSE is [a] psychometrically sound instrument for the uses for which it was designed, measures what it is intended to measure, and yields interpretable benchmark scores for comparison across institutions" (p. 4). Institutions use the data to demonstrate compliance with accreditation standards, implement programs to retain students, and assess initiatives focused on engaging students at a level comparable to peer institutions. NSSE has demonstrated its validity and reliability and continues to test survey items to remain responsive to the changing educational environment. Minor changes do not hinder institutions' ability to conduct longitudinal studies about student engagement to assess what is or is not working. Using a psychometrically sound survey that produces actionable results is the most appealing aspect of NSSE for institutions.

The 2012 Users Resource provided institutions with information on how NSSE continues to utilize data to improve student learning. In January 2010, NSSE began a project funded by the Spencer Foundation, Learning to Improve: A Study of Evidence-Based Improvement in Higher Education. In conclusion, the Indiana University Center for Postsecondary Research provides a psychometrically sound survey tool along with an abundance of supporting resources for institutions of higher education. NSSE survey results give institutions detailed data that can be used to inform change and improvement in the educational environment.