Despite the University’s seemingly skittish initial response to the use of AI, UR has now begun embracing the advancing technology with the release of its own chatbot.
Last month, the University released chat.rochester.edu, a variation of the ubiquitous ChatGPT licensed from OpenAI, the American company behind the GPT family of AI tools, through a partnership with Microsoft. The chatbot is the University’s first major internal effort to provide GPT-4o to its faculty, staff, and students.
The new chatbot is a system independent of the larger, commercially available ChatGPT, meaning that OpenAI’s neural network cannot access or train on data gathered from the University. This emphasis on privacy was one of the main factors driving the University’s initiative to offer the service.
The service comes with some other privacy considerations for affiliates. When asked whether the University had access to students’ queries when using the chatbot, Associate Vice President & Chief Technology Officer Dr. Robert Evangelista told Campus Times that University access would be limited without legal permission. “If we were given permission [from] our legal counsel or legal counsel of outside entities, … there’s ways that it’s probably like one or two people that could actually get into the nitty-gritty of the system and see questions,” he said.
The chatbot is part of the school’s greater “2030 Strategic Plan,” which the University describes as being “focused on research excellence, quality education, and a forward-thinking vision.”
“[This] was our first whack at it,” Evangelista explained. “We know it needs to be more robust and have all sorts of other features and functionalities. The next thing is [to] pull in files and analyze [them].”
Currently, the chatbot’s features are equivalent to those available in the free version of ChatGPT. However, Evangelista said that students can expect more features, such as file uploading, in the next few weeks.
In the near future, the University also hopes to add image creation from text-based prompts, according to Evangelista.
“[With] these features we will be implementing, it will be very similar to the OpenAI ChatGPT,” he said.
The current program draws on information from University websites sourced in fall 2024, meaning that it cannot actively search the web. This creates some additional problems and constraints, most notably ‘AI hallucinations,’ in which an AI program invents information, according to Evangelista. “Once in a while, since it’s an older LLM [Large Language Model], [it can suffer from] hallucinating – you know, it’s just bad data.”
To address this problem, the University plans to upgrade the chatbot to a newer version of the language model.
With the introduction of this new technology, there are still many unknowns. When asked what the University is doing to help educate students on AI use, Evangelista explained that the University currently has two subcommittees “working on training in the education and what AI can do, and how to understand the risks.”
As 2030 approaches, Evangelista hopes that the chatbot will continue to improve.
“When it comes to AI in general,” Evangelista said, “I hope that the tools are there helping the researchers really come up with things that are making the world better.”