Test Scoring & Questionnaires
-
Formula for calculating the median for USRI reports
Introduction
Universal Student Ratings of Instruction (USRI) course evaluations gather feedback from classes to help instructors, departments and faculties improve curriculum and instruction. As part of the results which are compiled, an average rating is calculated for each question. This article describes the formula that is used to calculate the median for USRI reports.

Applicability
This article was written for instructors and support staff at the University of Alberta.

Details
The arithmetic mean of a set of numbers is commonly called the average of the numbers, and its method of calculation is well known. A synonym for 'average' is 'typical', which may be a helpful word to use given the computational baggage associated with 'average'. Average also becomes a troublesome word when it brings to mind 'mediocre' as a synonym.

The median is another 'typical' value which may be used to represent a larger set of numbers. A simple definition of the median is that it is the middle value in a set of numbers that have been ordered by magnitude. This definition causes a problem when there is no middle value because there is an even number of values, or when several numbers share the same value around the point where the middle should occur. A more general definition of the median is that it is the 50th percentile of the frequency distribution formed by counting the number of times each value occurs.

When the distribution of numbers is symmetrical, such as the so-called bell curve, the mean and median are equal. When the distribution is not symmetrical, debate arises concerning which is the more typical number. An example often used concerns average income, where the arithmetic mean may be quite different from the income of the typical wage earner because of the skewed distribution of values. In this case, and often in the case of course/instructor rating scales, the median more closely approximates the income of the typical worker or the rating given by the typical student.

Calculating the median from a grouped frequency distribution recognizes that, for example, a 5-point rating scale constrains responses to a small set of discrete values even though the underlying attribute being measured is really continuous. Evidence of this is observed in the collection of students' ratings of instruction: some students mark two consecutive values in an attempt to communicate that they are not sure whether they want to award, for example, a 4 or a 5, and on a number of occasions a respondent will make a mark between the 4 and the 5. Neither of these types of responses provides valid data, but they do illustrate the presence of a continuous scale underlying the small set of discrete values.

Calculation of the median in such situations proceeds as follows. Suppose the distribution of responses given by a class of 25 students is: 1 Strongly Disagree, 1 Disagree, 4 Neutral, 8 Agree, and 11 Strongly Agree, and the values 1 through 5 are assigned as ratings corresponding to Strongly Disagree through Strongly Agree. The mean is 4.08. The median is computed as the value attributed to the 50th percentile point in the distribution of ratings given by the 25 respondents. Six responses are Neutral or lower, while 14 indicate Agree or lower. The point 12.5 (50% of 25) thus lies in the interval corresponding to Agree, which ranges from 3.5 to 4.5 when the distribution is considered continuous rather than consisting of the discrete values 1 through 5.
We need to travel (12.5 - 6) = 6.5 out of 8 units along the interval between 3.5 and 4.5. Therefore the median is computed as 3.5 + (6.5 / 8) = 4.31, a value that more closely reflects the consensus of the raters: almost 0.25 of a 'rating' higher than the mean.

The above can be summarized by the formula:

Median = L + I × ((N/2 - F) / f)

where:
L = lower limit of the interval containing the median (3.5 in the example above)
I = width of the interval containing the median (1.0 in the example above)
N = total number of respondents (25 in the example above)
F = cumulative frequency corresponding to the lower limit (6 in the example above)
f = number of cases in the interval containing the median (8 in the example above).
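For readers who want to verify the arithmetic, the sketch below implements the grouped-distribution median exactly as defined above. The function name and interface are our own illustration; TSQS's reporting software is not reproduced here.

```python
def grouped_median(counts, interval_width=1.0):
    """Median of a grouped frequency distribution.

    counts: dict mapping each discrete rating (e.g. 1..5) to its frequency.
    Each rating value v is treated as the continuous interval
    [v - width/2, v + width/2].
    """
    n = sum(counts.values())
    half = n / 2.0                      # N/2
    cumulative = 0                      # F, accumulated below the interval
    for value in sorted(counts):
        f = counts[value]
        if cumulative + f >= half:
            lower = value - interval_width / 2.0   # L
            # Median = L + I * ((N/2 - F) / f)
            return lower + interval_width * (half - cumulative) / f
        cumulative += f
    raise ValueError("empty distribution")

# Worked example from the article: 1 SD, 1 D, 4 N, 8 A, 11 SA
print(grouped_median({1: 1, 2: 1, 3: 4, 4: 8, 5: 11}))  # 4.3125, reported as 4.31
```

Running it on the example distribution reproduces the 4.31 reported above (4.3125 before rounding).

Keywords: USRI, reports, Calculating the Median, tsqs, median, calculate, formula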
-
USRI Reference Data
Introduction
Universal Student Ratings of Instruction (USRI) course evaluations gather feedback from classes to help instructors, departments and faculties improve curriculum and instruction. This article describes the use of the Reference Data columns in USRI results.

Applicability
This article was written for deans, chairs, students and instructors at the University of Alberta.

Details
The columns of reference data display statistics from Tukey's box-and-whisker plot analysis (John W. Tukey, Exploratory Data Analysis, Addison-Wesley Publishing Company, Inc., 1977). The values displayed are derived from all the classes in the indicated reference group. These statistics are chosen to achieve two main objectives: 1. summarize skewed distributions of data, and 2. identify outliers from the general population if they exist.

The median value (the middle of a ranked set of numbers) is generally preferred over the mean to identify the centre of a skewed distribution of scores. This is the value below which 50 percent of the medians from other classes lie. Please note that data for the items in the current set of mandated questions are accumulated from Academic Year 2005/06 and beyond. If an item (question) has not been used at least 15 times by the indicated reference group since then, the reference data cells will be filled with the text "too few uses". It is theoretically possible for all median scores in a single year to be above, or below, the Reference Group median.

The 25th and 75th percentiles provide information about the spread of scores around the median. By definition, twenty-five percent of the scores are above the 75th percentile and twenty-five percent are below the 25th percentile. Since this occurs by definition, these values should not be used to determine whether a particular score is good or bad.

The lower Tukey Fence, which is the 25th percentile minus 1.5 times the distance from the 25th to the 75th percentile, defines a reasonable limit beyond which a score can be considered an outlier. Outliers are scores that appear to be outside the usual distribution of scores for the population being tabulated, i.e., for the indicated reference group. Given the nature of the USRI data, the upper Fence will usually be above 5.0 and, therefore, need not be reported.

Please note that some items can be expected to elicit higher ratings because they are closer to 'apple pie' types of items, i.e., we would expect the item to be rated quite positively. This is illustrated by the campus-wide results accumulated in the years 2000-2004 for the two items shown below (the 25%, 50% and 75% columns are the reference data percentiles).

Item | Tukey Fence | 25% | 50% | 75%
The instructor treated students with respect. | 3.4 | 4.3 | 4.6 | 4.8
Overall, the quality of the course content was excellent. | 2.9 | 3.8 | 4.1 | 4.3

This suggests that the median obtained for the first item in a particular class can be expected to be 0.5 of a rating above that for the second item, simply because that has been found to be the case in results from thousands of classes surveyed at the University of Alberta. Note that the 25th percentile for the first item corresponds to the 75th percentile for the second item.

Also, the reference group used for a particular class consists of all classes in the indicated department or faculty. One of the most consistent findings of researchers studying students' ratings of instruction is that the ratings obtained for items such as those addressing general satisfaction with a course or instructor depend on the discipline in which the course is taught.
Franklin and Theall (1995) reported that "Professors in fine arts, humanities, and health-related professions are more highly rated than their science, engineering and math-related colleagues." There appears to be a combination of reasons for these differences, including differences in the characteristics of the students, in the nature of the subject matter, and in the course objectives that are emphasized in different disciplines. The sizes of the differences, and the conclusion that they are not necessarily related to characteristics of the instructors in the different disciplines, lead to the advice that "we must continue to be very cautious about, if not prohibited from using, the results of student evaluations to make comparisons across disciplines" (Marincovich, 1995).

Results for the item "Overall, this instructor was excellent." illustrate that findings at the University of Alberta are consistent with the research studies. The reference data from some of the departments in which a large number of classes have been surveyed appear in the following table.

Department | Tukey Fence | 25% | 50% | 75%
Physics | 2.4 | 3.7 | 4.1 | 4.5
Computing Science | 2.5 | 3.7 | 4.1 | 4.5
Electrical & Computer Engineering | 2.7 | 3.9 | 4.2 | 4.6
Mathematical & Statistical Sciences | 2.8 | 3.9 | 4.2 | 4.6
Earth & Atmospheric Sciences | 3.0 | 4.0 | 4.3 | 4.6
Biological Sciences | 3.1 | 4.0 | 4.3 | 4.6
English | 2.8 | 4.0 | 4.4 | 4.7
Modern Languages & Cultural Studies | 2.9 | 4.0 | 4.4 | 4.8
History & Classics | 3.4 | 4.2 | 4.5 | 4.7
Elementary Education | 2.7 | 4.0 | 4.5 | 4.8
Drama | 2.9 | 4.1 | 4.7 | 4.9

References
Franklin, J., and Theall, M. "The Relationship of Disciplinary Differences and the Value of Class Preparation Time to Student Ratings of Teaching." In N. Hativa and M. Marincovich (eds.), Disciplinary Differences in Teaching and Learning: Implications for Practice. San Francisco: Jossey-Bass, 1995.
Marincovich, M. "Concluding Remarks: On the Meaning of Disciplinary Differences." In N. Hativa and M. Marincovich (eds.), Disciplinary Differences in Teaching and Learning: Implications for Practice. San Francisco: Jossey-Bass, 1995.
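As an illustration, the lower fence can be recomputed from the quoted percentiles using the definition given above (Q1 minus 1.5 times the interquartile range). The function is our own sketch; note that the published fences are presumably derived from unrounded percentiles, so recomputing from the one-decimal table values gives slightly different numbers.

```python
def lower_tukey_fence(q1, q3):
    """Lower Tukey fence: the 25th percentile minus 1.5 times the
    distance from the 25th to the 75th percentile (the IQR)."""
    return q1 - 1.5 * (q3 - q1)

# Rounded campus-wide percentiles for "The instructor treated students
# with respect." (first table above): Q1 = 4.3, Q3 = 4.8.
print(lower_tukey_fence(4.3, 4.8))  # 3.55, versus the published 3.4
```

Keywords: tsqs, reference data, USRI reports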
-
Recommended Reading for Creating Multiple Choice Tests
Introduction
Occasionally, inquiries about scoring multiple choice tests begin with the more basic question: "Can you suggest some references that provide advice on construction of multiple choice tests?" This article provides a list of textbooks and resources recommended by Test Scoring and Questionnaire Services (TSQS) in response to this question.

Applicability
This article was written primarily for instructors and support staff at the University of Alberta, although it may also be useful to students in the Faculty of Education.

Details
The following textbooks provide advice on construction of multiple choice tests:
Haladyna, Thomas M. Developing and Validating Multiple-Choice Test Items. Hillsdale, NJ: Lawrence Erlbaum Associates, 1994.
Haladyna, Thomas M. Writing Test Items to Evaluate Higher Order Thinking. Needham Heights, MA: Allyn & Bacon, 1997.
Jacobs, Lucy Cheser & Chase, Clinton I. Developing and Using Tests Effectively: A Guide for Faculty. San Francisco: Jossey-Bass Inc., 1992.
Osterlind, Steven J. Constructing Test Items. 2nd ed. Boston: Kluwer Academic Publishers, 1998.

See also: Writing Multiple-Choice Test Items, and IDEA Papers from Kansas State University such as No. 16: Improving Multiple-Choice Tests.
Jacobs, Lucy C. How to Write Better Tests: A Handbook for Improving Test Construction Skills. Indiana University, Bloomington.

A publication produced by the National Board of Medical Examiners is available (in PDF format) at http://www.nbme.org/about/itemwriting.asp. Although it is written with reference to Basic and Clinical Sciences, the guidelines for writing good items apply to the more general case. The publication is 181 pages in length.

Keywords: Multiple choice tests, Recommended Reading, test, multiple choice
-
Additional Options for Generating Class Lists with GPSCOR and MRSCOR
Introduction
When completing the GPSCOR/MRSCOR Request for Service Form, the option of generating additional files may be selected as follows:
1. a file suitable for uploading to eClass (this is done by marking the bubble next to eClass and writing your Class ID on the line immediately below), or
2. a comma-delimited file (*.csv) that is easily opened using Excel.
This article explains the files that are generated by making this request. For general information on the options available for generating class lists, please see the relevant section in KB0012169 for GPSCOR and in KB0012170 for MRSCOR. Documentation on uploading the eClass file is available here: Gradebook utility.

Applicability
This article was written for instructors and support staff at the University of Alberta.

Details
eClass Specific
The eClass file will contain each student's last name, first name, ID number, user ID, and total score in the format expected by eClass (using comma delimiters). Normally, two files will be generated when you request an eClass file. The first is a file with the suffix .log which contains a report of the matching process used to associate the data from the scanner file with the class list obtained from the Registrar's database. The second file is the end product of the matching process; it will normally have the suffix _w.csv. If answer sheets are scored which we did not (or could not) match to the Registrar's class list, a third file will be created with the suffix _u.csv.

Please note that the first line in the *_w.csv file is a title line describing the data. It contains the text: Last Name,First Name,Student No,User Id,OMR Score. You will usually find it helpful to replace "OMR Score" with a more meaningful title. If the list of incorrect responses is requested, the column heading for this field will be "Items Wrong", while the column heading "Scored Responses" will appear if the scored (R/w) responses have been requested.

The matching process is an iterative one that should generally result in an accurate association of the data in the scanner file with the student identification information obtained from the Registrar's database. The following steps are used (and may be proofed by examining the *.log file); a rough sketch of this cascade appears after the list:
1. Answer sheets having Names and ID numbers which correspond exactly with the information in the database are identified, matched and eliminated from further consideration.
2. Answer sheets having ID numbers that are identical and Names which are reasonably similar to those in the database are identified, matched and eliminated from further consideration.
3. Answer sheets having ID numbers which match the last 6 digits and Names which are reasonably similar to those in the database are identified, matched and eliminated from further consideration.
4. Answer sheets having Names which are reasonably similar to those in the database are identified, matched and eliminated from further consideration.
5. Identification information on any remaining answer sheets is manually examined and associated with students remaining in the Registrar's class list if such an association appears reasonable.
6. Answer sheets that, in the judgement of the TSQS operator, cannot be matched to the official class list are not included in the eClass score file. These records, if any, will be identified in the *.log file under the heading "Number of remaining unmatched scanner records" and then placed in a separate file with the suffix *_u.csv.
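The sketch below models the cascade in Python for illustration only: the actual TSQS matching program is not public, and the similarity test shown (difflib's ratio against an arbitrary 0.8 threshold) is our stand-in for whatever "reasonably similar" means in practice.

```python
from difflib import SequenceMatcher

def similar(a, b, threshold=0.8):
    """Stand-in for 'reasonably similar' names (threshold is assumed)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def match_sheets(sheets, roster):
    """Match scanner records to the Registrar's class list in passes.

    sheets and roster are lists of dicts with 'name' and 'id' keys.
    Returns (matched pairs, unmatched sheets left for manual review).
    """
    passes = [
        lambda s, r: s["id"] == r["id"] and s["name"] == r["name"],         # pass 1: exact
        lambda s, r: s["id"] == r["id"] and similar(s["name"], r["name"]),  # pass 2
        lambda s, r: s["id"][-6:] == r["id"][-6:] and similar(s["name"], r["name"]),  # pass 3
        lambda s, r: similar(s["name"], r["name"]),                         # pass 4
    ]
    matched, remaining = [], list(sheets)
    candidates = list(roster)
    for rule in passes:
        still_unmatched = []
        for sheet in remaining:
            hit = next((r for r in candidates if rule(sheet, r)), None)
            if hit:
                matched.append((sheet, hit))   # eliminated from further passes
                candidates.remove(hit)
            else:
                still_unmatched.append(sheet)
        remaining = still_unmatched
    return matched, remaining  # remaining -> manual review, then *_u.csv
```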
The final information appearing in the *.log file is a listing of any students in the Registrar's class list who do not have an associated answer sheet identified in the scanner file. This list, along with the list produced in step #6 in particular, should be reviewed before uploading the score file to eClass.

Excel Specific
The contents of the Excel file depend on which program is used to create it. If the Scansort program is used, the file will contain any biographical information that was read from the answer sheets (such as name, ID number, special codes) plus all the scores derived from the keys that were used to process the job. If Examtab is used to create the Excel file, only one column will be created for the score, which will be the score derived from the key for the appropriate test form. In this case, an option also exists to include a column indicating which key was applied to respective answer sheets. The first field in the Excel file contains the sheet number that was printed on the answer sheets when they were processed.

Relevant to both types of files
There are two optional text fields that may be included in these files. You may request either or both of these fields; the default is neither of them. The first option, obtained by marking the Wrongs bubble, is a list of item numbers corresponding to incorrect responses by the student. The field begins with W=, which is followed by the numbers of the items that were incorrect. If a student has no incorrect responses, the field contains W=NONE. The second is a list of scored responses. This field begins with R= and is followed by alphabetic values corresponding to the student's responses. If a response is correct, the character will be in upper case; if a response is incorrect, it will be in lower case. If the responses were scanned as numeric values, the numbers will be replaced by their corresponding alphabetic characters. If a response was not given for an item, '-' will appear in the corresponding position. If more than one response was given, '*' will indicate this. If a correct response was not provided for an item (on the key), the corresponding character in the output will be '.'; three dots (...) are used to indicate that 3 or more consecutive items were not keyed. If an operator was asked to confirm the match, a '?' will appear between the identification number and the name in the log file.
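To make the R= encoding concrete, the following small decoder (our own illustration, not a TSQS utility) classifies each character of a scored-responses field according to the conventions just described.

```python
def decode_scored_responses(field):
    """Classify each position of an R= scored-responses field.

    Conventions described above:
      upper case -> correct response
      lower case -> incorrect response
      '-'        -> no response given
      '*'        -> more than one response given
      '.'        -> item was not keyed
    """
    assert field.startswith("R="), "field should begin with R="
    meanings = {"-": "omitted", "*": "multiple", ".": "not keyed"}
    labels = []
    for ch in field[2:]:
        if ch.isupper():
            labels.append("correct")
        elif ch.islower():
            labels.append("incorrect")
        else:
            labels.append(meanings.get(ch, "unknown"))
    return labels

# Hypothetical student: right, wrong, omit, double-marked, unkeyed item
print(decode_scored_responses("R=Ab-*."))
# ['correct', 'incorrect', 'omitted', 'multiple', 'not keyed']
```

Keywords: scoring reports, results, scoring, class list, eClass file, .csv, excel file, tsqs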
-
IDQ Course Evaluations for Team Teaching Format
Introduction
This article describes the use of the Instructor Designed Questionnaire (IDQ) system in team teaching, where multiple instructors are evaluated on a single questionnaire.

Applicability
This article was written for instructors and support staff at the University of Alberta.

Details
A special format is available in the IDQ system to support evaluation of courses which involve multiple instructors. The questionnaire is arranged such that the questions that apply to the overall course appear first. These are followed by questions that apply to the instructor, and the instructor-related questions are repeated for each instructor involved in the course. This format avoids burdening students by asking the same course-related questions multiple times, and it draws attention to the idea that the number of questions asked about a particular instructor should decrease as the number of instructors increases. When these questionnaires are processed, the data are separated such that a separate report is generated for each of the instructors appearing on the questionnaire. Results for the common, course-related questions are included on each instructor's report.

Keywords: Course evaluations, USRI, Team Teaching, reports, IDQ, TSQS
-
Scoring Multiple Choice Tests
Introduction
Multiple-choice tests are a popular method of providing reliable and valid measures of achievement in many courses on campus. In recent years, their usage has increased as class sizes have grown, making other measures of achievement practically infeasible. Test Scoring and Questionnaire Services (TSQS) uses Optical Mark Reader (OMR) technology to capture and score responses on appropriate OMR forms and then offers a comprehensive array of analysis and reporting procedures. This article describes the services offered for test scoring, gives an overview of the test scoring process, and introduces the programs available for scoring multiple choice tests.

Applicability
This article was written for instructors and support staff at the University of Alberta.

Details
Services offered for test scoring
Optical Mark Reading (OMR) is accurate and efficient, and results are delivered with a maximum 24-hour turnaround on weekdays. They may include:
- raw scores printed on each answer sheet
- class lists with names, ID numbers and scores
- comprehensive item analysis reports
- comma-delimited results files suitable for opening in Excel or uploading to eClass
- merging results from alternate forms of a test into a single report
- combining (weighted) scores from sub-sections of a test into a single score
- formula (right minus wrong) scoring
- rescaling scores to percentages or other totals, or to specified means and standard deviations (a sketch of this appears at the end of this article)
- reports of right and wrong answers given by each student, and more.

An overview of the steps involved in the test scoring process
1. Select the scoring option (see below).
2. Select the appropriate answer sheet for your test (Answer sheet style 1 / Answer sheet style 2).
3. Obtain answer sheets. The University of Alberta Bookstore has answer sheets available for purchase if more than 100 are required. Otherwise, you can get them from the TSQS office in 240 GSB (General Services Building).
4. Fill in the answer sheet key.
5. After your test is completed, gather the students' answer sheets, making sure that all the answer sheets are included.
6. Complete a Request for Service form. (This form is available here.)
7. Send a printed copy of the Request for Service form, along with the key and the students' answer sheets, to the TSQS office at 240 GSB.
8. After TSQS has processed your exam, you will receive an email including the class list, item analysis, and any other files requested (outputs from scanning). Exam scores will be ready for pickup or can be sent by campus mail to your office.

Options for Scoring
Two generalized programs are available for scoring instructor-developed multiple choice tests. GPSCOR (General Purpose Scoring Program) is used for scoring tests when students are expected to respond with only one answer per question. For more information on GPSCOR, please see KB0012142. MRSCOR (Multiple Response Scoring Program) is provided for situations where students are allowed or expected to respond with more than one answer per question. For more information on MRSCOR, please see KB0012141. Each of these programs may be used to score tests when the students' responses have been recorded on one of the three types of General Purpose Answer Sheets that are stocked on campus.
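The following sketch illustrates two of the transformations listed above: formula (right minus wrong) scoring and linear rescaling to a specified mean and standard deviation. The function names, the 0.25 penalty weight, and the linear rescaling method are our assumptions for illustration; TSQS's actual programs are not shown here.

```python
from statistics import mean, stdev

def formula_score(right, wrong, wrong_weight=0.25):
    """Right-minus-wrong formula scoring; the penalty weight is an example."""
    return right - wrong_weight * wrong

def rescale(scores, target_mean, target_sd):
    """Linearly rescale scores to a specified mean and standard deviation."""
    m, s = mean(scores), stdev(scores)
    return [(x - m) / s * target_sd + target_mean for x in scores]

print(formula_score(right=38, wrong=8))              # 36.0
print(rescale([32, 41, 27, 38, 45], target_mean=70, target_sd=10))
```

Keywords: TSQS, Test, Scoring, Multiple Choice, services, options, overview, how does it work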
-
TSQS Charges and Rates
Introduction
This article describes the charges and rates for Test Scoring & Questionnaire Services (TSQS). Also included is some information on how charges are assessed, including sections on Operator Charges, Scanning Charges and Web Surveys.

Applicability
This article was written for instructors and support staff at the University of Alberta.

Details
Different rates are charged for Test Scoring & Questionnaire Services depending on the source of funding. University rates are offered to clients when the bill is to be debited against a University-administered account. Bills paid in any other manner are assessed the External rate. Current rates are as follows:

Item | Unit | University ($) | External ($)
IDQ Questionnaires | Each | 0.17 | 0.17
Operator Time | Hour | 27.00 | 36.00
Scanning Charge | Sheet/Booklet | 0.06 | 0.08
Booklet Charge | Sheet | 0.01 | 0.01
Analyst Time | Hour | 60.00 | 60.00
Reports | Printed sheet | 0.20 | 0.20

A minimum of 10 minutes is assessed for each occasion of each activity.

Operator Charges
Charges for operator time are assessed for:
- creating IDQ questionnaires (when the client does not use the Online Requisition system)
- scanning answer sheets
- generating reports (beyond the first set, which is normally included in the scanning charge)
- routine processing of Web Surveys.

Beyond the minimum charge, operator time can be inflated by the following situations.
Generation of IDQ Questionnaires:
- unusual requests (such as producing a questionnaire containing many unique questions)
- submitting separate requisitions for each class
- requisitions causing uncertainty that require a number of phone calls in order to discover what is being requested.
Scanning Time:
- poorly marked forms (including those using liquid pens, miscoded ID numbers and/or cross-outs rather than erasures)
- jobs submitted in disarray (e.g., forms not oriented in a consistent direction, damaged forms)
- use of MRSCOR (Multiple Response Scoring Program).

Scanning Charges
The term 'booklet' refers to documents that consist of two (2) or more sheets that are processed as a single entity. Booklets are assessed the basic scanning charge, plus one cent for each sheet comprising the booklet. Volume discounts are offered on the scanning charges for single-sheet documents, based on the total number of sheets scanned for a given account within each month. Typically, up to 400 sheets can be processed in ten (10) minutes; large jobs requiring minimal operator intervention are processed at approximately 4000 sheets per hour.

Web Surveys
Analyst rates are assessed for designing web surveys that are deemed non-routine, while operator rates are assessed for those that build on existing templates (i.e., are simple modifications of surveys that have been created earlier). New (non-routine) surveys will usually require two (2) or more hours of analyst time. In addition, operator rates are assessed for e-mailing reminders and for downloading responses from the web server. The scanning sheet charge is applied for each response record that is retrieved.
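To show how the pieces of the rate schedule combine, here is a hypothetical invoice for a 300-sheet single-sheet scanning job at University rates. The scenario is invented; the per-sheet and hourly figures come from the table above, and the 10-minute operator minimum is applied on the assumption (from the note above) that up to 400 sheets fit within it.

```python
# Hypothetical example: 300 single-sheet answer sheets at University rates.
SCAN_PER_SHEET = 0.06      # Scanning Charge, University rate
OPERATOR_PER_HOUR = 27.00  # Operator Time, University rate

sheets = 300
scanning = sheets * SCAN_PER_SHEET           # 18.00
operator = OPERATOR_PER_HOUR * (10 / 60)     # 10-minute minimum: 4.50
print(f"Total: ${scanning + operator:.2f}")  # Total: $22.50
```

Keywords: TSQS, Charges, rate schedule, rates, cost, price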
-
MRSCOR (Multiple Response Scoring Program)
Introduction
Test Scoring and Questionnaire Services (TSQS) offers two types of scoring: GPSCOR (General Purpose Scoring Program) and MRSCOR (Multiple Response Scoring Program). This article provides some additional information on MRSCOR, which is utilized for situations in which students are allowed or expected to respond with more than one answer per question.

Applicability
This article was written for instructors and support staff at the University of Alberta.

Details
MRSCOR provides a variety of scoring options. (In all cases, only a single key sheet is used.)

1. Scores may be computed by focusing only on the responses that the student made. In this case, the number of correct responses (R) is counted, the number of incorrect responses (W) is counted, and a final score is reported which would normally be R - W. (The option is also provided to apply a weight other than one to either R or W.) The total, unweighted, score possible in this case is equal to the number of answers marked on the key sheet. Note that this means that the possible score for a question depends on the number of responses marked for that question on the key.

2. Scores may also be computed by summing the number of correct behaviours performed by the student, i.e., summing both the number of times that a correct answer is marked and the number of times a choice is correctly left blank. The total, unweighted, score possible in this case is equal to the number of questions on the test multiplied by the question length (the number of response choices of each question).

3. A third alternative is a two-step process. After using the second option (above), a program can be run that converts the output file into a GPSCOR (General Purpose Scoring Program) file. In the conversion process, complete questions are marked right or wrong (depending on whether or not the complete pattern of responses and omits matches the key that has been provided). In this case, the total score possible is equal to the number of questions on the test.

Please refer to the instructions for completing the Optical Mark Reader Request for Service Form in KB0012170.
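To make the first two options concrete, consider a hypothetical 5-choice question whose key marks A and C, answered by a student who marks A and D. The sketch below is our own illustration; MRSCOR itself is not reproduced here.

```python
def score_responses_only(key, marked, r_weight=1.0, w_weight=1.0):
    """Option 1: count only what the student marked (normally R - W)."""
    right = len(marked & key)   # correct choices marked
    wrong = len(marked - key)   # incorrect choices marked
    return r_weight * right - w_weight * wrong

def score_correct_behaviours(key, marked, choices="ABCDE"):
    """Option 2: one point per choice correctly marked OR correctly left blank."""
    return sum((c in key) == (c in marked) for c in choices)

key, marked = {"A", "C"}, {"A", "D"}
print(score_responses_only(key, marked))      # 1 right - 1 wrong = 0.0
print(score_correct_behaviours(key, marked))  # A marked, B and E left blank -> 3
```

Under option 1 the maximum for this question is 2 (the number of answers on the key); under option 2 it is 5 (the question length), which matches the totals described above.

Keywords: MRSCOR, multiple answers scoring, scoring options, TSQS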
-
Generating IDQ Questionnaires for a Class
Introduction
The Instructor Designed Questionnaire (IDQ) system helps reduce the irrelevant items often associated with standardized questionnaires; allows for the inclusion of items relevant to the individual instructor, the university, the faculty and the department; and allows for normative as well as individualized feedback on the quality of instruction. This article describes the methods of generating Instructor Designed Questionnaires (IDQs) and outlines the additional requirements that must be met before responses can be collected in a Universal Student Ratings of Instruction (USRI) survey. It also includes links to information on additional features, including generating IDQs for use in team-teaching classrooms.

Applicability
This article was written for instructors and support staff at the University of Alberta. Access to IDQ Requisitions Online is restricted to designated individuals: our regular 'IDQ contacts'. Normally, we have one 'IDQ contact' in each Department. To gain access to this system, please contact Test Scoring and Questionnaire Services.

Procedure
Method of generating
Normally, to generate your questionnaires, you must use the Online Requisition system. From the information that is provided via the Online Requisition system, we create "upload files" for identification of department, faculty, students, courses and enrolments (an automated process). This automated process depends on the accuracy of the information in PeopleSoft. It demands that:
1. the appropriate instructor(s), as shown at https://tsqs.srv.ualberta.ca/cgi-bin/classids/cls_query.pl, has/have been identified for the class through the use of their employee ID(s), and
2. the students enrolled in this class correspond to the information available from PeopleSoft.
When these conditions do not hold, the departmental contact will be responsible for creating the above set of "upload files" and providing them to Test Scoring and Questionnaire Services (TSQS). Contact TSQS to obtain the specifications.

Additional Requirements for generating an IDQ
The General Faculties Council (GFC) policy requiring at least 10 students in a class before numeric responses may be collected in a Universal Student Ratings of Instruction (USRI) survey will continue to be enforced. CoursEval supports the merging of classes into a single survey for an instructor in order to meet this minimum and/or to make allowance for the fact that credit for different courses may be received within a given class. (This has been supported by our IDQ system for many years.) CoursEval may be used for administering questionnaires that are restricted to "comments only" when the minimum enrolment is not met (4 to 9 students). The block-id, 3OPN, contains a set of 4 recommended questions. CoursEval also supports team-teaching scenarios where open-ended questions may be repeated for each instructor and the reporting of these comments is individualized to the appropriate instructor. This eliminates the need that has existed in some departments of having separate questionnaires printed for each instructor in a team-taught class in order to achieve confidentiality among instructors.

Related information
Additional information about the features of IDQs is available in KB0012148.
Additional information about generating IDQs for team teaching is available in KB0012144.

Keywords: IDQ, questionnaires, usri, course evaluations, tsqs
-
USRI Search Tips
Introduction
This article describes how to search for USRI reports online.

Applicability
Deans, Chairs, Students.

Details
The USRI results can be accessed using the link below:
https://tsqs.srv.ualberta.ca/cgi-bin/usri/usri.pl

The search for reports begins with an option to sort the results by Instructor's Last Name or by Course Title. The result of the search will be a listing of all classes that meet your criteria, sorted in the order that you have selected. You are then required to select the specific classes for which you wish to see the actual reports. If you choose to search for results from more than one year, the list of classes will include an indication of the year; the year is not displayed if you are only searching for data from one year.

You are given the option of entering some text for which to search. The search will be performed on the Last Name field or the Course Title field, depending on the sort option that you chose above. Note that if, at any time, you press the Reset Values button on this page, the sort option is reset to its default, which is to sort by Course Title. This may confuse you if you are searching for an Instructor's Last Name and don't notice the effect of resetting the sort option.

The search procedure assumes that the first character that you type will be the first character in the text for which you wish to search. If you wish to search for a specific course number, type the first one or two characters of the Course Title, leave a space, and then type the course number that you wish to locate. Note that typing a single digit for the course number does not yield only the courses beginning with that number; instead, all courses having that number anywhere in their titles will be selected. The number of instructor/class names that will be displayed as a result of the above search has been restricted to 500.
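The two search behaviours described above can be modelled loosely in code. This is our own reconstruction for illustration, not the actual server logic, and the function and course titles are hypothetical.

```python
def matches(title, query):
    """Two documented behaviours, modelled loosely:
    - normally, the query is anchored at the first character of the field;
    - a lone digit matches that digit anywhere in the title."""
    if query.isdigit():
        return query in title
    return title.startswith(query)

courses = ["CHEM 102", "ECON 281", "MATH 102"]
print([c for c in courses if matches(c, "2")])     # all three contain a '2'
print([c for c in courses if matches(c, "MATH")])  # ['MATH 102']
```

Keywords: USRI, Search, results, Tips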