Aaron Tay
@aarontay.bsky.social
I'm librarian + blogger from Singapore Management University. Social media, bibliometrics, analytics, academic discovery tech.
Tried again with "research mode" so it thinks harder.
November 8, 2025 at 7:40 PM
Seems to do okay getting to the figure or table (3)
November 8, 2025 at 7:27 PM
Results are hit and miss. Here I ask it to find the last sentence of a paywalled Wiley journal article; you can see from the JSON query and response that it does find the paper. But it struggles to find the chunk with the last sentence. (2)
November 8, 2025 at 7:17 PM
Trying out the Wiley Scholar Gateway - an MCP server that allows Claude to search Wiley full text directly as a tool. Currently in beta. Claude can send queries to the MCP server, which returns chunks of text (the usual vector embedding match) to help Claude answer the question. (1)
November 8, 2025 at 7:13 PM
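For readers unfamiliar with the pattern, here is a minimal sketch of the "vector embedding match" retrieval described above. Everything is a stand-in (the model name, chunks, and top_k are my assumptions, not anything Wiley has disclosed). It also hints at why the "last sentence" query above struggles: a positional question carries little semantic signal for an embedding match.

```python
# Minimal sketch of chunk retrieval by vector embedding match.
# Model, chunks, and top_k are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

# Pretend these are chunks of a Wiley article's full text
chunks = [
    "Introduction: large language models are increasingly used...",
    "Methods: we sampled papers across many disciplines...",
    "In conclusion, retrieval quality remains the key bottleneck.",
]
chunk_vecs = model.encode(chunks, convert_to_tensor=True)

def retrieve(query: str, top_k: int = 2):
    """Return the top_k chunks most similar to the query."""
    q_vec = model.encode(query, convert_to_tensor=True)
    scores = util.cos_sim(q_vec, chunk_vecs)[0]
    best = scores.argsort(descending=True)[:top_k]
    return [(float(scores[i]), chunks[i]) for i in best]

# "Last sentence" has no topical overlap with the final chunk's content,
# so nothing forces that chunk to rank first.
print(retrieve("What is the last sentence of the paper?"))
```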
Research Commons + Preprint Citation Index = the Web of Science platform's coverage is much less selective. Note though that even if you search using "all databases", the default is still to exclude these two databases, and you must explicitly remove the exclusion (2)
November 3, 2025 at 9:49 AM
Very interesting: Web of Science now has "Research Commons" clarivate.com/academia-gov... It adds 32M more metadata records, +21% more journal content! This pulls from open sources like OpenAlex and @crossref.bsky.social ! (1)
November 3, 2025 at 9:47 AM
There is also supposed to be better entity recognition, so it knows whether your input is an author, journal, institution, topic etc. I'm getting mixed results. For example, I can't get it to do . webofscience.zendesk.com/hc/en-us/art...
November 3, 2025 at 9:38 AM
Also handles basic Boolean input, not just natural language
November 3, 2025 at 9:32 AM
Cool, Web of Science Smart Search is now more transparent: it shows how it interprets your input (the Boolean part). webofscience.zendesk.com/hc/en-us/art...
November 3, 2025 at 9:30 AM
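As a toy illustration of what "interpreting your input into Boolean" means - these parsing rules are invented, not Clarivate's actual Smart Search logic; only the TS= topic field tag is real Web of Science syntax:

```python
# Toy natural-language-to-Boolean interpreter. NOT Smart Search's logic;
# the rules here are invented for illustration.
import re

def to_boolean(query: str) -> str:
    """Quoted phrases stay intact; remaining words are ANDed together."""
    phrases = re.findall(r'"([^"]+)"', query)
    rest = re.sub(r'"[^"]+"', " ", query).split()
    terms = [f'"{p}"' for p in phrases] + rest
    return "TS=(" + " AND ".join(terms) + ")"

print(to_boolean('"machine learning" academic libraries'))
# TS=("machine learning" AND academic AND libraries)
```

The point of the feature is that whatever mapping the product actually uses, you can now see the resulting Boolean query instead of trusting a black box.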
Moara.io or the "mother of all research assistants" is surprising to me. Point #2 (not training on content) is standard, but #1, #3 & #4 reflect a conservative stance on AI. In fact, it looks more like a Covidence competitor than an Elicit competitor. (1)
johnfrechette.substack.com/p/the-bottom...
November 3, 2025 at 4:10 AM
📣 Registration OPEN for #FORCE2026 (3–5 Jun, Singapore). A conference on the future of research communication & open science.
Early-bird: by 28 Feb 2026.
Call for Proposals: ends 9 Nov 2025. Authors get a special rate. Details & CFP force11.org/force2026/
#FORCE2026 #ScholarlyCommunication
November 2, 2025 at 6:41 PM
As a librarian, I had a hazy understanding of terms like keyword vs full-text search. Part of it is the multiple ways people use "keyword search". Notably, I used to think keyword search had to be Boolean.
November 2, 2025 at 6:26 PM
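A toy contrast, under one common reading of the terms (keyword search as exact term matching over metadata fields - Boolean or otherwise - vs full-text search scanning the document body). The records and field names are invented:

```python
# Toy contrast between "keyword search" over metadata and full-text search.
# Records and fields are invented for illustration; note the keyword search
# happens to be Boolean AND here, but keyword search need not be Boolean.
records = [
    {"title": "Boolean retrieval in libraries",
     "abstract": "Classic exact-match searching.",
     "full_text": "... semantic search appears only in the appendix ..."},
    {"title": "Semantic search for scholars",
     "abstract": "Embedding-based discovery.",
     "full_text": "... we compare against Boolean baselines ..."},
]

def keyword_search(terms, records):
    """Boolean AND over metadata fields only (title + abstract)."""
    hits = []
    for r in records:
        meta = (r["title"] + " " + r["abstract"]).lower()
        if all(t.lower() in meta for t in terms):
            hits.append(r["title"])
    return hits

def full_text_search(term, records):
    """Single-term match anywhere in the document body."""
    return [r["title"] for r in records if term.lower() in r["full_text"].lower()]

print(keyword_search(["boolean", "libraries"], records))  # metadata only
print(full_text_search("semantic", records))              # matches body text
```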
Unfortunately, the free version has limits. E.g. you can only have one project (which defeats the purpose of having colors), you lack advanced search settings, and you can only search from up to 50 inputs vs 300 (the interface seems to use 20?) (6)
October 30, 2025 at 9:12 AM
You can similarly do "similar", "references", "cites" for individual papers or for all papers in the collection in ResearchRabbit. But what's really nice is you can filter both by keyword, SJR quartile, journal h-index, whether it is open access, and even exclude retractions! (5)
October 30, 2025 at 9:05 AM
It's also easy to switch the recommendation method between "similar", "references", "cites" - I suspect "similar" here is title/abstract semantic similarity, if this maps to what Litmaps does. So you can now combine text similarity + citations to find related papers - very powerful (4)
October 30, 2025 at 9:01 AM
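A hedged sketch of how text similarity and citation overlap might be blended to rank related papers - my guess at the general recipe, not ResearchRabbit's or Litmaps' actual method. The papers, vectors, and the alpha weight are all invented:

```python
# Sketch: blend title/abstract embedding similarity with citation overlap.
# All data and the weighting scheme are illustrative assumptions.
import numpy as np

papers = {
    "seed": {"vec": np.array([0.9, 0.1, 0.2]), "refs": {"A", "B", "C"}},
    "p1":   {"vec": np.array([0.8, 0.2, 0.1]), "refs": {"A", "B", "X"}},
    "p2":   {"vec": np.array([0.1, 0.9, 0.3]), "refs": {"A", "B", "C"}},
    "p3":   {"vec": np.array([0.2, 0.1, 0.9]), "refs": {"Y", "Z"}},
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def coupling(refs_a, refs_b):
    """Bibliographic coupling: Jaccard overlap of reference lists."""
    return len(refs_a & refs_b) / len(refs_a | refs_b)

def related(seed, alpha=0.5):
    """Blend text similarity with citation overlap, weighted by alpha."""
    s = papers[seed]
    scores = {}
    for name, p in papers.items():
        if name == seed:
            continue
        scores[name] = (alpha * cosine(s["vec"], p["vec"])
                        + (1 - alpha) * coupling(s["refs"], p["refs"]))
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(related("seed"))  # p1 wins on text, p2 on citations; alpha trades off
```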
The visualization is improved by basically lifting from Litmaps: the x-axis and y-axis as well as the size of nodes are extremely customizable, similar to Litmaps. There are lots of clever things you can do by changing the axes to reflect dates, citations, references etc. (3)
October 30, 2025 at 8:58 AM
The main improvement, I think, is that it hides all past rabbit holes. In the old version you would have columns and columns of past iterations; now it is all hidden until you press the icons on top, which is smart (2)
October 30, 2025 at 8:55 AM
Results attached. Citation retrieval is somewhat acceptable when models are asked to simultaneously retrieve 3 papers (<10% failure) but gets really bad when asked to retrieve 10: failure rates range from 48-98% (6)
October 29, 2025 at 5:12 PM
100% human evaluated, no LLM-as-judge. "A response fails when it produces incorrect outputs or omits required information." Task-specific criteria can be seen in the attached image (5)
October 29, 2025 at 5:08 PM
Adding the other 2 subtasks of the paper reading task - Open-Domain Question Answering and claim verification - without any comment (5)
October 29, 2025 at 5:07 PM
Task 2 is content extraction. Here they ask the LLM to extract from 140 papers across many disciplines that have PDFs freely available online. It is asked to extract the last sentence of the introduction, the number of figures/tables, and the caption of a figure/table. Unlike content retrieval, this is done paper by paper (4)
October 29, 2025 at 5:03 PM
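To make that extraction subtask concrete, here is a naive sketch of pulling the last sentence of the introduction from plain text - my toy heuristic, not the benchmark's harness; real PDFs need far more robust parsing:

```python
# Naive sketch of one extraction subtask: last sentence of the Introduction.
# Toy heuristic on plain text; invented for illustration only.
import re

def last_intro_sentence(paper_text: str) -> str | None:
    # Grab text between an "Introduction" heading and the next heading.
    m = re.search(
        r"introduction\s*\n(.*?)(?:\n\s*(?:\d+\.?\s+)?[A-Z][A-Za-z ]+\n|\Z)",
        paper_text, flags=re.IGNORECASE | re.DOTALL)
    if not m:
        return None
    # Split the section into sentences and return the last one.
    sentences = re.split(r"(?<=[.!?])\s+", m.group(1).strip())
    return sentences[-1] if sentences else None

demo = """1 Introduction
LLMs are widely used. Yet retrieval remains hard. We study this gap.

2 Related Work
Prior work ..."""
print(last_intro_sentence(demo))  # We study this gap.
```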
"Citation retrieval evaluates whether models can locate papers by their exact titles and generate accurate bibliographic information" Query retrieve BibTeX of 3,5,10 papers simultaneously from papers in CS/arxiv. They checked bing/google [paper title] site:arxiv.org returns top result with paper(3)
October 29, 2025 at 5:00 PM
"Citation retrieval evaluates whether models can locate papers by their exact titles and generate accurate bibliographic information" Query retrieve BibTeX of 3,5,10 papers simultaneously from papers in CS/arxiv. They checked bing/google [paper title] site:arxiv.org returns top result with paper(3)
Cool, it now has an option to include non-core publications. The open edition also includes more universities. I can confirm that, for the first time, my university makes the cut and appears in the open edition (not the traditional one).
October 29, 2025 at 4:10 PM
Clearly fact-based questions are easier, while fast-moving, complicated "topics that require clear distinction between facts and opinions and proper attribution of claims" are more tricky (same for humans) (4)
October 26, 2025 at 6:51 PM
It is important to note that "issues" here is very wide-ranging. I am not sure I agree that all of them should be considered issues (3)
October 26, 2025 at 6:50 PM