
Published November 1, 2018 | Multidisciplinary

Hubert.ai Chats with Your Students to Obtain Course Feedback

While obtaining student feedback has been an integral part of my teaching practices from the very beginning, it was never a truly satisfying or enjoyable activity. Over the years, I experimented with different ways of obtaining feedback, most of which were either very labour-intensive (open-ended questions) or overly formalized (multiple-choice questions or ratings). Artificial intelligence (AI) has advanced to the point where chatbots can now hold text-based conversations and analyze them. This term, I am experimenting with Hubert.ai, a platform that harnesses the power of AI to obtain meaningful feedback through chat sessions with my students. So far, this has proven to be an effective and engaging format for everyone involved.

Benefits and Drawbacks of Obtaining Student Feedback

The benefits of obtaining course feedback from students have been widely documented. Apart from the insight it provides into the students’ appreciation of the teaching and learning activities, such feedback also has more indirect outcomes, as highlighted in a 2009 OECD report:

  • Feedback has a strong positive influence on teachers and students alike.
  • Feedback significantly increases teachers’ motivation and desire for professional development.
  • Feedback increases job satisfaction.
  • The more detailed the feedback, the more empowered students and teachers feel.

While the potential positive outcomes are clear, obtaining meaningful feedback from students is not always an easy feat.

  • Effective feedback strategies should be frequent, quick, and informal. However, such an approach can quickly become cumbersome if it requires students to repeatedly fill out surveys.
  • While student conferences or small-group discussions often yield the most meaningful, in-depth comments, they do not allow students to express themselves anonymously. This has an impact on participation rates and the type of comments shared. This approach is also very time-consuming.
  • Surveys are the most frequently used method of obtaining feedback from students. Whether administered on paper or electronically, they may be perceived by students as overly formal. The use of multiple-choice questions and Likert scales can constrain the variety of comments, while including too many open-ended questions does not allow answers to be analyzed efficiently.

Over the summer, I discovered Hubert.ai, a platform that mimics the richness of face-to-face conversation and open-ended questions, yet offers the benefits of electronic surveys:

  • Students give feedback anonymously.
  • Administration and collection are quick and easy.
  • Answers are analyzed automatically and presented in different report formats.

Who Is Hubert.ai and How Does It Work?

Hubert is a chatbot trained to extract qualitative data from chat-based conversations with students, in the same way a teacher or facilitator would. The platform offers a fully functional demo to show how students experience the chat process. At the time of writing, Hubert.ai is in its second beta version, which means features are expected to be improved and added in the near future. Creating a teacher account is entirely free, although the website’s disclaimer does reserve the company’s right to charge a fee for certain features in the future.

I found signing up for Hubert.ai and getting started to be extremely intuitive. On the homepage, the Create a Free Account button takes the teacher to a form requesting basic information: name, affiliation and email address. Students do not need to create an account; they access the chat-based evaluation directly through a clickable link. Hubert.ai offers integration with email, Facebook and Google, and you can also simply copy the link and share it with students in another way.

Hubert.ai offers a quick and intuitive signup procedure, available from the homepage.

Setting up a chat-based evaluation is equally simple. Once logged in, you can create a new evaluation from the Evaluations menu at the top of the screen. Hubert.ai does not yet support user-generated questions, but allows you to build an evaluation by selecting several parameters:

  • The role of the person being evaluated (teacher, guest speaker, tutor, professor, etc.)
  • The object being evaluated (the course, an activity, an assessment, a presentation, etc.)
  • The duration (availability) of the evaluation (minimum 15 minutes and maximum 1 month)
  • A question set (start/stop/continue, 2 stars and a wish, quick check)

At the time of writing, Hubert.ai offers 3 different question sets. Combined with the evaluation object, these allow you to customize the chat-based evaluation.

Once the availability period is over, Hubert.ai automatically generates a Results page. From this page, it is possible to download a transcript file compiling all of the students’ comments, much like the file you would obtain if you administered a survey using Google Forms.
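
For teachers who like to dig into such an export themselves, here is a minimal Python sketch showing how a downloaded transcript could be loaded and tallied. The file name and column headers ("Question", "Comment") are assumptions made for illustration only; Hubert.ai’s actual export format may be organized differently.

```python
import csv
from collections import Counter

# Hypothetical example: load a downloaded transcript export and count
# how many comments each question received. The file name and column
# headers are assumptions for illustration, not Hubert.ai's actual format.
with open("evaluation_transcript.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

comments_per_question = Counter(row["Question"] for row in rows)
for question, count in comments_per_question.items():
    print(f"{question}: {count} comments")
```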

Hubert.ai also generates 2 reports – a summary and a more detailed version. Using its text-analysis capabilities, Hubert.ai determines an Overall Experience rate, expressed as percentages of positive, negative and neutral comments. The report also lists frequently mentioned keywords that, based on the context in which students mentioned them, are categorized as Strengths or Improvables. Finally, the detailed report includes a sample of comments that the artificial intelligence considers representative of the overall sample, again based on the keywords they contain.
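
To give a rough idea of the kind of analysis involved, here is a toy Python sketch of a sentiment breakdown over a handful of comments. It only illustrates the general technique; it is not Hubert.ai’s actual algorithm, and the word lists, classifier and sample comments are invented for the example.

```python
from collections import Counter

# Toy illustration of a sentiment breakdown -- NOT Hubert.ai's actual
# algorithm. The word lists below stand in for a real sentiment classifier.
POSITIVE = {"great", "clear", "helpful", "interesting"}
NEGATIVE = {"confusing", "boring", "fast", "unclear"}

def label_sentiment(comment: str) -> str:
    words = set(comment.lower().split())
    if words & POSITIVE and not words & NEGATIVE:
        return "positive"
    if words & NEGATIVE and not words & POSITIVE:
        return "negative"
    return "neutral"

comments = [
    "The examples were clear and helpful",
    "The pace was a bit fast and confusing",
    "It was fine",
]

counts = Counter(label_sentiment(c) for c in comments)
total = len(comments)
for sentiment in ("positive", "negative", "neutral"):
    share = 100 * counts[sentiment] / total
    print(f"{sentiment}: {share:.0f}%")

# In the same spirit, keywords appearing mostly in positive comments would
# surface as "Strengths" and those in negative comments as "Improvables".
```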

Example of a summary report generated after one of my evaluations. The percentages have been omitted.

Example of a detailed report generated after one of my evaluations. The percentages and sample comments have been omitted.

Reports, sorted chronologically, remain available from the Evaluations menu until you decide to delete them. Each evaluation is considered independent, so if you wish to frequently obtain feedback, or work with different groups, it is important to clearly label each evaluation when creating it.

My Experience with Hubert.ai

After about 10 weeks of using the platform, I can confirm that obtaining student feedback this way has been a positive experience both for me and for my students:

  • My students clearly prefer the chat-based format over a form-based survey. This motivates them to answer the questions, even when I ask them to do so at home.
  • Hubert.ai cleverly prompts students to develop their answers when they only type a few words. This means I obtain more meaningful feedback than I did using Google Forms, where students often wrote a mere “yes” or “no.”
  • Setting up and administering a chat-based evaluation takes only minutes, which makes it possible to obtain feedback frequently without it becoming too time-consuming or burdensome for the students.
  • The text-based analysis, although not perfect, makes it easier to get a general sense of the students’ appreciation without closely reading all of the specific comments. It works particularly well when soliciting feedback on a specific assignment or activity, but is less precise when asking students to evaluate the course as a whole.

I will definitely continue using the platform with my current and future student groups. Over time, I hope the platform will make it possible to add specific questions for Hubert to ask, rather than having to choose from 3 predetermined sets – although these have admittedly worked well for me. Further development of the artificial intelligence will hopefully also improve its analytical capacities, so that the keywords identified as Strengths or Improvables become more meaningful.
