Dataset: ICLR

In this assignment, we are working with manuscripts and their reviews from a famous CS conference, ICLR (International Conference on Learning Representations). This is a top computer science conference on machine learning. Each manuscript has 2–3 reviews. Each row in training.csv and test_contentonly.csv represents one review of a specific manuscript. The files contain the following columns:

- id: id of the manuscript
- reviewer_name: name of the reviewer for this manuscript
- title: title of the manuscript
- abstract: abstract of the manuscript
- comments: review text for this manuscript by a specific reviewer
- decision: final decision (1 if the manuscript was accepted, 0 otherwise)

The decision column is not included in test_contentonly.csv. Instead, it is provided in test_label.csv.

Grading policy

We will grade based on your code notebook (Python notebook or R Markdown file) on GitHub. Your code should clearly document the process you followed and the decisions you made. Also discuss your results when appropriate (see the problem descriptions below).

1. Supervised methods (60 pts)

Please use Python or R for this assignment. In this task, you need to predict whether a manuscript is accepted (1) or rejected (0), based on the review texts.

1.1 Dictionary method (20 pts)

Use the dictionary method to predict whether manuscripts in the test data were accepted or rejected.
- List the dictionaries you used.
- Discuss how you constructed your dictionary (e.g., by reading and summarizing, using embeddings, etc.). A minimal sketch of a dictionary-based classifier appears after the problem descriptions.

1.2 Supervised methods (20 pts)

Use supervised learning methods to predict whether manuscripts in the test data were accepted or rejected, using training.csv as the training data. One possible pipeline is sketched after the problem descriptions.

1.3 Evaluation (20 pts)

Compare the supervised method's performance with the dictionary method's, based on the test data. The correct labels are provided in test_label.csv. Report the following:
- Precision
- Recall
- F1 score
- AUC score (of the ROC curve)

Discuss whether the supervised method or the dictionary method yields better performance, and what allowed you to achieve good prediction performance. A sketch of the metric computation also appears below.
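For 1.1, one way to get started is a simple keyword-counting classifier. The sketch below is only illustrative: the positive and negative word lists are hypothetical placeholders (for the actual assignment you would build and justify your own dictionaries), and it assumes the column names described above.

```python
# Minimal sketch of a dictionary-based classifier over the review texts.
# The word lists are illustrative placeholders, not a recommended dictionary.
import pandas as pd

POSITIVE = {"novel", "strong", "convincing", "clear", "thorough", "significant"}
NEGATIVE = {"unclear", "weak", "limited", "incremental", "confusing", "insufficient"}

def dictionary_score(text: str) -> int:
    """Count positive minus negative dictionary hits in one review."""
    tokens = str(text).lower().split()
    pos = sum(tok.strip(".,;:!?") in POSITIVE for tok in tokens)
    neg = sum(tok.strip(".,;:!?") in NEGATIVE for tok in tokens)
    return pos - neg

test = pd.read_csv("test_contentonly.csv")
test["score"] = test["comments"].apply(dictionary_score)

# Aggregate the 2-3 reviews of each manuscript and predict accept (1)
# when the summed dictionary score is positive.
per_paper = test.groupby("id")["score"].sum().reset_index()
per_paper["pred_dict"] = (per_paper["score"] > 0).astype(int)
```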
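For 1.2, the sketch below shows one possible supervised pipeline, not the required approach: concatenate each manuscript's reviews into a single document, vectorize with TF-IDF, and fit a logistic regression on the decision label from training.csv. The preprocessing choices (n-gram range, stop words, the classifier itself) are assumptions you would tune and document yourself.

```python
# Sketch of a TF-IDF + logistic regression baseline (one of many options).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train = pd.read_csv("training.csv")
test = pd.read_csv("test_contentonly.csv")

# Combine the 2-3 reviews of each manuscript into one document per id.
def per_manuscript(df: pd.DataFrame, with_label: bool) -> pd.DataFrame:
    agg = {"comments": lambda s: " ".join(s.astype(str))}
    if with_label:
        agg["decision"] = "first"  # the decision is identical across a paper's review rows
    return df.groupby("id", as_index=False).agg(agg)

train_doc = per_manuscript(train, with_label=True)
test_doc = per_manuscript(test, with_label=False)

model = make_pipeline(
    TfidfVectorizer(stop_words="english", min_df=2, ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_doc["comments"], train_doc["decision"])

test_doc["pred_supervised"] = model.predict(test_doc["comments"])
test_doc["prob_supervised"] = model.predict_proba(test_doc["comments"])[:, 1]
```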
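For 1.3, the metrics can be computed with scikit-learn once the predictions are merged with the provided labels. This sketch assumes test_label.csv contains the manuscript id alongside its decision column (check the exact column names in your copy of the file) and reuses test_doc from the supervised sketch above.

```python
# Sketch of the evaluation step: precision, recall, F1, and ROC AUC.
import pandas as pd
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score

labels = pd.read_csv("test_label.csv")        # assumed columns: id, decision
eval_df = test_doc.merge(labels, on="id")     # test_doc from the supervised sketch

y_true = eval_df["decision"]
y_pred = eval_df["pred_supervised"]
y_prob = eval_df["prob_supervised"]

print("Precision:", precision_score(y_true, y_pred))
print("Recall:   ", recall_score(y_true, y_pred))
print("F1 score: ", f1_score(y_true, y_pred))
print("AUC:      ", roc_auc_score(y_true, y_prob))

# Repeat the same four metrics for the dictionary-method predictions,
# using the raw dictionary score in place of a probability for the AUC.
```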
