CSI’s Department of Education regularly collects, analyzes, and uses data to make sure we are providing our students with quality education and resources.
The data we collect assess how well we are meeting the goals and objectives in our department’s Conceptual Framework. The pillars of our conceptual framework are intellectual autonomy and professional responsibility: we strive to nurture educators who are lifelong learners and education advocates. We hope to inspire in our students a conviction not only to improve their own and their students’ understanding of themselves and their world, but also to improve the teaching profession itself through collaboration, high ethical standards, and innovative practice.
Our conceptual framework is summarized in this diagram:
You can read more about our conceptual framework here.
We don’t simply talk about lofty goals; we check to see how well we’re reaching them. When we find areas to improve, whether in our courses or our programs, we make changes and then monitor those changes, in large part through continuous assessment. This comprehensive assessment system includes data from a variety of sources: students, faculty, school personnel, and our alumni. Currently, data are collected through our newly purchased online database, HigherEd. Participation by all stakeholders through HigherEd is therefore a requirement of our assessment system.
The way data flow through the department is depicted in this graph:
Details about the Data-Driven Decision Process
Expanding on the flow of data represented in this schematic, the steps in the department’s data-driven decision process are:
1) Whenever a student reaches one of the four transition points, data are collected from and about that student. Most of these data are collected through Tk20.
- The Key Assessments are arranged by transition point in these tables of the transition points.
- The Key Assignments are arranged by Conceptual Framework Objective and course in these Program Portfolio Matrices.
These transition points serve as “gates” for our students: they cannot progress past a given transition point until they have met the minimal conditions delineated there. It is therefore imperative that faculty frequently monitor where the students in their program stand relative to their current transition point, and that all students have the support and information they need to progress in a timely fashion. Simply put, the transition points must serve as a way for us to stay attuned to our students and their needs, not as impediments to their progress through their programs.
Students, therefore, submit several types of information and assessable materials through Tk20. Among these are applications to programs, field experiences, and educational research seminars; Key Assignments for their Program Portfolios; field experience Self-Evaluations; and surveys, such as the Candidate Exit Survey (as well as responses to other surveys as needed, such as the Student Needs Survey Battery).
The primary way that faculty (both full- and part-time) contribute to the collection of data is by assessing students at various points in their progress through their programs. Faculty assess applicants’ applications, instructors assess students’ Key Assignments and the accompanying Reflections, and field experience supervisors evaluate students in the field as delineated in the Student Teaching/Special Education Practicum Handbook.
In addition to the data collected at each transition point, we also collect data about our graduates through the alumni survey and the prospective employer survey.
2) All data except those from the cooperating teachers are collected through Tk20. Faculty can access several reports summarizing student data at varying levels of detail. Students, in turn, use Tk20 to access feedback on Key Assignments, Reflections, and supervisor fieldwork observation evaluations, as well as their progression through the transition points.
3) All of the key assessment data are managed by the Director of Assessment, Dr. El Samuels. Dr. Samuels prepares the data for interpretation by department faculty and HEO staff, and can also provide varying levels of raw data for the faculty and HEO staff to analyze themselves.
4) Dr. Samuels prepares the data for presentation to the faculty, staff, and students in ways appropriate and informative for each audience.
5) The Assessment Committee is the first departmental body responsible for regularly reviewing the data. The committee meets at least once a semester, at which time it reviews and interprets the data and the initial analyses performed by Dr. Samuels. This review includes monitoring ongoing student and departmental performance (to ensure that current performance levels are at least maintained), monitoring the effects of changes made in the past, and discussing ways of improving the validity and utility of the assessment system in cost- and time-effective ways.
The Assessment Committee is chaired by the NCATE coordinator. Members of the committee include the Director of Assessment, members of the department with strong backgrounds in quantitative and qualitative data analysis, a representative from Institutional Research, teachers and administrators from partnering schools, and representatives from other institutes of higher education.
The Assessment Committee is charged with making recommendations, based on its interpretation of the data, to other bodies within the department (as well as to other college agencies whose work influences the performance and support of the department’s students).
6) Typically, the Assessment Committee makes its recommendations to either the Teacher Education Advisory Committee (TEAC) or the Partnership Advisory Committee (PAC). It may also make recommendations to other bodies as necessary, such as the Undergraduate and Graduate Curriculum Committees.
TEAC is the primary venue for stakeholders from other areas of the college to review data, be apprised of developments and issues affecting education students (and prospective education students), and have input on course, program, and departmental operations. TEAC meets at least once a semester and is chaired by the Dean of Humanities and Social Sciences.
PAC is the primary venue for stakeholders from partnering schools and the community; consequently, the membership is comprised of teachers, administrators, and community leaders (as well as members of the department who work closely with students when they are in the field). PAC is chaired by the Director of Fieldwork and also meets at least once a semester.
In addition to input from those in attendance at any of these committees’ meetings, all three committees solicit feedback from members who were unable to attend by informing them of the conversations that took place at the meetings and asking for their input through email, phone conversations, and informal face-to-face meetings (although email remains the main way absent committee members stay in the loop).
7) A committee to which the Assessment Committee makes a recommendation may decide to move the proposal forward to the full department faculty for further action, ask the Assessment Committee for more information before acting, or decide that no action is necessary.
8) If a proposal is forwarded to the full departmental faculty, the faculty discuss it and decide whether the issue requires further action. The full faculty may also request more information from either the forwarding committee or the Assessment Committee.
9) Any changes or actions enacted by the department are then monitored by the Director of Assessment and the Assessment Committee at future meetings. The results of this monitoring are discussed at Assessment Committee meetings, and further action may be taken depending on the data.
A summary of data-driven changes implemented as a result of this Assessment System is given here.
How We Ensure That Our Assessment Procedures Are Fair, Accurate, Consistent, and Free of Bias
The unit takes seriously any threats to the integrity of its assessment system. The following measures have been taken to ensure that its assessment procedures are fair, accurate, consistent, and free of bias:
The unit makes every effort, through multiple means, to inform candidates of all requirements as early in their program as possible. Materials are given to students when they inquire about applying to programs and again when they are admitted. These efforts reflect the unit’s commitment to fairness and to ensuring that all candidates have equal access to pertinent information.
Rubrics used to assess candidates’ work are shared with candidates before they are assessed, so they have a clear understanding of what is expected of them. These rubrics, developed through faculty collaboration, are applied consistently across candidates and programs.
In addition, the same outcomes are assessed through different assessment tools to ensure accuracy and fairness. When, for example, multiple assessment measures point to a weakness in a given programmatic area, that information is discussed and used to drive program improvement.
The unit ensures accuracy by verifying (through collaboration within the unit and across the institution’s departments) that assessment measures are linked to the unit’s conceptual framework, the INTASC Standards, the NBPTS Standards, the New York State Education Department Standards, and the standards delineated by the various professional organizations.
The graduating survey and the alumni survey contain the same questions, and overall correlations between responses on the two surveys are assessed to measure consistency. The first graduating surveys were distributed in spring 2004; the first alumni surveys, in spring 2007.
Free of Bias
Assessments are governed by CF-linked rubrics that evaluate candidates solely on their performance relative to the rubric, not on personal, cultural, or other extraneous factors. An important point at which bias is eliminated is entry into the Education program: applicants are admitted based solely on objective criteria related to success as an educator. Ensuring that all applicants have an equal opportunity for admission to programs eliminates a large source of potential bias. Faculty assess applicants’ writing samples for writing skills and dispositions. While a low-rated sample does not by itself keep a candidate from being admitted to the unit, it can serve as an indication of inappropriate or undesirable dispositions; on rare occasions the unit has excluded applicants on this basis, with the possibility of future reapplication.