Eliciting Effectiveness of Elicit (AI research lit review tool)

Elsewhere in OEG Connect I have been posting notes from trying out various new Artificial Intelligence tools.

This all started with one of many valuable critical perspectives on Artificial Intelligence by Emily Bender, specifically here on whether AI can do literature review (the thread is well worth reading, as is her post on AI Hype Takedowns):

https://twitter.com/emilymbender/status/1518428712781238273

She refers to a new venture, Elicit, identified as an “AI Research Assistant” - if I understand correctly, it provides GPT-3 powered AI to help frame research questions, running its queries through the APIs for Semantic Scholar.
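For the curious, the Semantic Scholar side of this is an openly documented API, which gives a sense of the raw material a tool like Elicit works from. A minimal sketch, assuming Python with the requests library; it queries the public Semantic Scholar Graph endpoint directly, with a placeholder question, and is not how Elicit itself is implemented:

```python
import requests

# Rough sketch: ask the public Semantic Scholar Graph API a plain-language
# question and print basic metadata for the matching papers.
# (Illustration only; Elicit's own pipeline is not public.)
API_URL = "https://api.semanticscholar.org/graph/v1/paper/search"

def search_papers(question, limit=10):
    response = requests.get(
        API_URL,
        params={
            "query": question,
            "limit": limit,
            "fields": "title,year,abstract,externalIds",
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("data", [])

# Placeholder question, in the spirit of the examples tried below.
for paper in search_papers("effects of open education on student outcomes"):
    print(paper.get("year"), "-", paper.get("title"))
```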

I decided to try it out, so I created an account. Besides the description of Elicit on its main web site, the email responding to my account creation contained a bit more:

The main workflow in Elicit is literature review, which you see on the home page. This workflow tries to answer your research question with information from published papers.

Elicit currently works best for answering questions with empirical research (e.g. randomized controlled trials). These tend to be questions in biomedicine, social science, and economics. Questions like “What are the effects of ____ on ____?” tend to do well.

and also included a link to an explainer video. I am more the type to jump in and try, then return to watch the video later.

Also, my discoplaimer is that I am not regularly a researcher; I just like to find and learn about things. Thinking of topics within OEGlobal's interest and scope, I experimented broadly with “How effective is open education in Asia?” and clicked to view the outcomes column.

Again, not being a researcher on this topic, I cannot judge the results.

For a second try I entered “How many institutions have an open education policy?” Where I think it might help is in how it suggests different search queries.

The last one looked most interesting: “How do institutions with an open education policy compare to those without one?”

Again, it's outside my expertise to say how valid or invalid the AI is here; I would love to hear from the practicing researchers here whether it is useful or not.

At least it's a break from generating weird images of robots writing at a desk :wink:

Is anyone willing to give Elicit a try and report back here?

Nope! :wink:

Ok, maybe.
Because:

Also, this:


“Mort Kunstler, Disco Dilemma, For Men Only cover painting, January 1968” by cartoonpinup is marked with Public Domain Mark 1.0.

So, my first impressions, after a couple of minutes…

I might enjoy it! :smiley:
And what makes a huge difference, for me, is the human approach behind the tool. (Yes, I’m a technopedagogue.) It feels like it can be quite useful as a tool. And the team clearly has an interest in user-centred (and maybe even user-driven) design.
Rather easy to use, especially since they give us examples of research questions.
Perhaps more importantly (at least, most importantly for me), they manage expectations in the part you shared about where Elicit currently works best. :+1:t4:
And, yeah, it works for that kinda thing. Which is very far from my approach to research. Which means Elicit can actually help me, since delegating that “empirical research with randomized controlled trials” to a 'bot is very fitting.

I like the fact that it uses Semantic Scholar (so, Linked Open Data, LOD!). That’s precisely the approach we want in such a context. I’ve heard researchers in ML/AI dismiss LOD. Yet LOD is a way to bridge the gap between human knowledge and machine processing.

I also like the fact that I can export in .bib format, that I can restrict results to those with full PDFs, that it avoids Google Scholar altogether…
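(To make that .bib and full-PDF point concrete, here is a rough sketch of doing something similar straight against the Semantic Scholar Graph API: keep only results that expose an open-access PDF and write a crude BibTeX entry for each. Assumes Python with requests; my own illustration, not how Elicit's export actually works.)

```python
import requests

API_URL = "https://api.semanticscholar.org/graph/v1/paper/search"

def open_access_results(query, limit=20):
    """Search Semantic Scholar and keep only results with an open-access PDF."""
    response = requests.get(
        API_URL,
        params={"query": query, "limit": limit,
                "fields": "title,year,authors,openAccessPdf"},
        timeout=30,
    )
    response.raise_for_status()
    return [p for p in response.json().get("data", []) if p.get("openAccessPdf")]

def to_bibtex(paper):
    """Write a crude @article entry; a real export carries richer metadata."""
    authors = " and ".join(a["name"] for a in paper.get("authors", []))
    key = (paper.get("title", "untitled").split()[0] + str(paper.get("year", ""))).lower()
    return (f"@article{{{key},\n"
            f"  title = {{{paper.get('title', '')}}},\n"
            f"  author = {{{authors}}},\n"
            f"  year = {{{paper.get('year', '')}}},\n"
            f"  url = {{{paper['openAccessPdf'].get('url', '')}}}\n}}")

# Placeholder query for illustration.
for paper in open_access_results("open education policy institutions"):
    print(to_bibtex(paper))
```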

So, in that couple of minutes, I became the proverbial happy camper.

Here’s what I did.

  • I clicked on the link you shared for the ‘splainer vid. Noticed it was 18’ long. tl;dw
  • Registered my email.
  • Got this Typeform survey about how I’m going to use Elicit.
  • Described myself as an independent researcher.
  • Entered a quick description of my (longterm, slow, self-funded, informal) research project: Participatory-Action Research on Inclusive Learning through Music Technology.
  • Declared my main pain point potentially solved by Elicit as: translating my work to robotese (i.e. geekspeak).
  • Added a comment about cutting the AI hype and getting some work done.
  • Got the confirmation email.
  • Clicked on one of the research questions in that email.
  • Noticed that I was already logged in.
  • Glanced at the results for that question: typical “randomized controlled trial” stuff. :yawning_face:
  • Changed some words in that question to fit my project: “What are the effects of music technology on inclusive learning practices?”

I must say, the first result surprised me.
“Music Technology as a Means for Fostering Young Children’s Social Interactions in an Inclusive Class” (ASI, mdpi.com)
Not surprised that Elicit would find it, but that it existed in the first place. As happens so frequently in research, I was too caught up in my own ways to realize that others had similar ideas from a very different angle.

And that’s among the most valid reasons to do a lit review instead of just pseudo-random searches online.

Wish it had more license info with those sources. And/or it could help transform those dry research papers into usable material for learning (which might require remixing; particularly hard with CC-BY-ND PDFs!). And the need for an account does raise an orange flag. (For instance, there's an OER database in our network which eliminates sites that require registration. Because children.
And Human Rights: https://www.hrw.org/report/2022/05/25/how-dare-they-peep-my-private-life/childrens-rights-violations-governments.)

Still, my wariness about AI hype prepared me for a negative experience and I’ve experienced something relatively positive.

Merci Alex for giving this a thorough trial run and sharing your results. Like you, I have been wary (or weary) of the AI Hype. Here it is not super evident that anything like that is going on; I'd like to think that, as the site info explains, it might be able to surface more than what direct keywords can produce.

I wonder what use we can make of it beyond casual / “this is neat” efforts. Would it work to solicit, as an ongoing series, some research interests from this community? We could suggest different means to explore them, not just this one tool.

(If you cannot tell, I am always trying to stir this community space cauldron to find activities that would draw in more participation.)

As far as this

I might call this Human Unintelligence or, more typically, my poor keyboard skills. But now I like it as some kind of portmanteau, perhaps worthy of being illustrated as you have done.

Many thanks again.


Yes.
OEG members actively engaged in academic research around OE surely have a lot to contribute… as does anyone who’s been asking questions about OE, even without a thorough background in research methods. Here, I’m thinking about both the people who tend to present research results at conferences and junior learners who are discovering Open Education, reacting in fresh ways. They often ask the best questions, which could make for a co-design activity.

It also means that we can engage with doubters, naysayers, detractors. Their tough questions are as biased as those of staunch advocates. When we want to falsify our hypotheses, it’s useful to “think like them”. Easier to do it by engaging with them.

Personally, I’m quite intrigued by ideas of OE’s “pedagogical efficacy”. Especially since it’s really tricky to assess. And it might become difficult to hear.
One side of this is about OER use. Last Summer, Wiley posted his reaction to the lack of evidence of efficacy.
On the Relationship Between Adopting OER and Improving Student Outcomes – improving learning (opencontent.org)
Not surprising, for people who have been telling the “Open Content guy” that it’s not merely about using content. Key to that post is the instructor effect, perceived as a “confounding factor”. Somebody else might say that this is precisely where we get into Open Pedagogy, including the prominent approaches based on creating OERs with learners.

So, we might still phrase research questions around issues of OE’s pedagogical efficacy using the apparatus of randomized controlled trials. Which doesn’t preclude the use of more nuanced approaches to participatory-action research, enabling learners to be agents in the research projects. Then, research questions could sound like Design Thinking exercises (“How Might We”). Not sure AI tools would prove that useful, then. At the same time, y’never know.

Stranger Things have happened.

Somewhat promotional, but an email from Elicit shared this interview and demo: “Write a Research Proposal in 2 Days with Elicit”.

In the video, Jan Hendrick Kircher, former PhD, shows how he uses Elicit to quickly write research proposals.

Jan's method covers how to quickly get to grips with the background and issues of your proposed research. With this comes a short literature review and a summary of key debates and developments in the field. All of this is made quicker with Elicit. He explains how he takes the academic papers he finds with Elicit.org and summarizes them in his notetaking app of choice, Roam Research. Using Elicit to quickly uncover all of the relevant academic literature makes it a piece of cake to assess which problems and issues are to be explored and why they are worth exploring.

AI Good? AI Bad? It’s not that simple…

Thank you for the contribution about the Elicit application. Congratulations on sharing it.

Some days ago I tried an OpenAI tool for a similar purpose. It generated some citations which were all non-existent, as I found when I tried to verify them by visiting the journal sites. Then I asked the app to provide DOIs; it returned results with DOIs, but they too were non-existent.

That sounds much more artificial than intelligent! Which AI tool was this? If it is the OpenAI ChatGPT tool many are discussing now, I doubt you could rely on its results as it is generating content based on patterns, not database sources.
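One quick sanity check for machine-generated citations is whether their DOIs even resolve. A minimal sketch, assuming Python with requests and the public doi.org handle API (my own illustration, not a feature of any of these tools; the DOIs below are placeholders to swap out):

```python
import requests

def doi_is_registered(doi):
    """Ask the doi.org handle API whether a DOI is actually registered.
    responseCode 1 means registered; 100 means the handle was not found."""
    response = requests.get(f"https://doi.org/api/handles/{doi}", timeout=30)
    return response.json().get("responseCode") == 1

# Placeholder DOIs; replace with the ones the chatbot produced.
for doi in ["10.1000/182", "10.1234/this-does-not-exist"]:
    print(doi, "->", "registered" if doi_is_registered(doi) else "not found")
```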

I know that Elicit and ExplainPaper make use of Semantic Scholar which includes reliable references.

Here's something I tried with Elicit: I asked elicit.org (https://elicit.org) this question: Has the use of OER textbooks and Zero Textbook Cost pathways improved student success rates?

Another version of the search statement: How do OER textbook and Zero Textbook Cost pathway students perform academically compared to students who use traditional textbooks?

I got summaries of the top 4 papers, and a list of references that can be downloaded into Zotero.

With a click, I can find more detail about specific papers including things such as an abstract, intervention, number of participants, etc. What about the quality of the papers retrieved? They look good!

Clicking “Task” in the upper right of the results lists suggestions for doing a literature review, for brainstorming research questions (to get more specific and related versions of your question), for search terms, etc. Of course I would check other search tools for literature on this topic, but Elicit's a good start.

Tutorial for Elicit by Mustaq Bilal on Twitter

Hi Ilene! I am encouraged to hear this from a librarian and wonder more about librarians' perspectives on this and related tools. Where does it fit, or will it fit, in terms of research tools?

If I understand correctly, the magic AI is able to cast a search net maybe a bit wider than what we get from keyword and filter selection. And like you, I found the results included a few papers I might not have found directly. In this case, it's a helping aid, not acting like an authority. Maybe.