Teaching Assistant Evaluation

To determine the Performance of a Teaching Assistant

Difficulty: Easy

The University of Wisconsin-Madison is concerned about the performance of its teaching assistants. It has kept detailed records of various performance parameters and has even manually assigned scores to the TAs.


Objective

Your task is to determine the Performance of a teaching assistant, i.e., to predict which of the following categories (1 = Low, 2 = Medium, 3 = High) the TA belongs to.


Evaluation Criteria

Submissions are evaluated using the Accuracy Score. How do we do it?

Once you generate and submit predictions of the target variable on the evaluation dataset, your submission is compared with the true values of the target variable.

The true values of the target variable are hidden on the DPhi Practice platform so that your model's performance can be evaluated on unseen data. Finally, an Accuracy Score for your model is generated and displayed.
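
Accuracy is simply the fraction of predictions that match the true labels. Because the evaluation labels are hidden, you can only estimate it locally, for example on a held-out split of the training data. A minimal, purely illustrative sketch using scikit-learn:

from sklearn.metrics import accuracy_score

# Toy example: 4 of the 5 predicted categories match the true ones -> accuracy = 0.8
y_true = [1, 2, 3, 3, 2]
y_pred = [1, 2, 3, 1, 2]
print(accuracy_score(y_true, y_pred))  # 0.8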

About the dataset

The data consist of evaluations of teaching performance over three regular semesters and two summer semesters of 151 teaching assistant (TA) assignments at the Statistics Department of the University of Wisconsin-Madison. The scores were divided into 3 roughly equal-sized categories ("low", "medium", and "high") to form the class variable.

To load the dataset in your Jupyter notebook, use the command below:

import pandas as pd

# Load the training set directly from the hosted CSV into a DataFrame
ta_data = pd.read_csv('https://raw.githubusercontent.com/dphi-official/Datasets/master/Teaching_Assistant_Evaluation/Training_set_ta.csv')
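
Once loaded, it helps to take a quick look at the data. A small sketch, assuming the command above ran successfully and that the target column is named 'Performance' as in the description below:

# Inspect the first rows, column types, and the class balance of the target
print(ta_data.head())
ta_data.info()
print(ta_data['Performance'].value_counts())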

Data Description

  • ES: Whether the TA is an English Speaker or not - binary (1 = English Speaker, 0 = Non-English Speaker)
  • Instructor: Course instructor - categorical (25 categories)
  • Course: Course - categorical (26 categories)
  • Semester: Summer or Regular semester - binary (1 = Summer, 2 = Regular)
  • Class_Size: Size of the class - numerical
  • Performance: Teaching performance over three regular semesters and two summer semesters - categorical (1 = Low, 2 = Medium, 3 = High)
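
Since Instructor and Course are categorical identifiers rather than ordered numbers, a reasonable starting point is to one-hot encode them and fit a simple classifier. The sketch below is only an illustrative baseline under those assumptions (column names taken from the description above), not a prescribed solution:

from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X = ta_data.drop(columns=['Performance'])
y = ta_data['Performance']

# One-hot encode the categorical identifiers; pass the remaining columns through unchanged
preprocess = ColumnTransformer(
    [('cat', OneHotEncoder(handle_unknown='ignore'), ['Instructor', 'Course'])],
    remainder='passthrough')

model = Pipeline([('prep', preprocess),
                  ('clf', RandomForestClassifier(random_state=42))])

# Hold out part of the training data to estimate accuracy on unseen rows
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)
model.fit(X_train, y_train)
print(accuracy_score(y_valid, model.predict(X_valid)))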

Evaluation Dataset

Load the evaluation data and name it 'evaluation_data'. You can load it using the command below.

evaluation_data = pd.read_csv('https://raw.githubusercontent.com/dphi-official/Datasets/master/Teaching_Assistant_Evaluation/Testing_s
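
Assuming evaluation_data has been loaded with the same feature columns as the training set (without the 'Performance' target), predictions from a fitted model such as the baseline sketched above can be generated like this:

# Predict the Performance category (1, 2 or 3) for each evaluation row
predictions = model.predict(evaluation_data)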

Reference

This dataset was downloaded from the UCI Machine Learning Repository - https://archive.ics.uci.edu/ml/datasets/Teaching+Assistant+Evaluation

 


File Format

Your submission should be in CSV format.

Predictions

This file should have a header row called 'prediction'.
Please see the instructions to save a prediction file under the “Data” tab.
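
A minimal sketch of writing such a file, assuming 'predictions' holds one predicted category per evaluation row (the file name is arbitrary):

import pandas as pd

# Single column named 'prediction', no index column
submission = pd.DataFrame({'prediction': predictions})
submission.to_csv('submission.csv', index=False)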

To participate in this challenge, you must either create a team or join an existing one.