I would pay for the opposite product: make your website completely unusable/unreadable by LLMs while readable by real humans, with low false positive rates.
The example API key on the page is decoded to "WOW YOU'RE A HACKER" :)
that == at the end looked like a base64 encoded string ;)
I built a similar thing as a python library that does just that: https://github.com/philippe2803/contentmap
Blog post that explains the rationale behind the library: https://philippeoger.com/pages/can-we-rag-the-whole-web
Just submit your XML sitemap to a Python class, and it will do the crawling, chunking, vectorizing, and storage in an SQLite file for you. It uses the SQLiteVSS integration with LangChain, but I'm thinking of moving away from that and integrating with the new sqlite-vec instead.
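For anyone curious what that pattern looks like end to end, here is a minimal sketch of the same idea (sitemap in, chunked and embedded pages out into SQLite). It deliberately avoids contentmap's actual class names, which may differ; the embedding model, table layout, and naive HTML handling are all assumptions for illustration.

```python
# Minimal sketch of the sitemap -> crawl -> chunk -> embed -> SQLite pattern.
# Not contentmap's real API; model name and schema are illustrative assumptions.
import sqlite3
import xml.etree.ElementTree as ET

import requests
from sentence_transformers import SentenceTransformer

def sitemap_urls(sitemap_url: str) -> list[str]:
    xml = requests.get(sitemap_url, timeout=30).text
    root = ET.fromstring(xml)
    # Sitemap <loc> elements are namespaced, so match on the tag suffix.
    return [el.text for el in root.iter() if el.tag.endswith("loc")]

def chunk(text: str, size: int = 200, overlap: int = 40) -> list[str]:
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size - overlap)]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
db = sqlite3.connect("site_vectors.db")
db.execute("CREATE TABLE IF NOT EXISTS chunks (url TEXT, chunk TEXT, embedding BLOB)")

for url in sitemap_urls("https://example.com/sitemap.xml"):
    page_text = requests.get(url, timeout=30).text  # real code would strip HTML first
    for piece in chunk(page_text):
        vec = model.encode(piece)
        db.execute("INSERT INTO chunks VALUES (?, ?, ?)", (url, piece, vec.tobytes()))
db.commit()
```

From there, querying is just embedding the question and doing a similarity search over the stored vectors, which is exactly the part sqlite-vss or sqlite-vec takes over.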
This is part of a tool I've dreamed of:
A relational crawler on a particular subject with nuanced, opaque, seemingly temporally unrelated connections that reveal a particular pattern of military-industrial-complex (MIC) conduct:
"Follow all the members of Congress who have been part of a particular committee, track their sponsorship/support for particular Acts that have been passed, and look at their investment history from open data, Quiver, etc. Then surface language in any public speaking about conflicts and arms deals, where their support for funding those conflicts is traceable through their Acts, committee seats, speaking engagements, and investment profits - comparing the gains reported in their investment filings against their stated net worth year over year. Apply this pattern to all of Congress and the public-profile orbit of people around them, without violating their otherwise private actions."
And give it a series of URLs with known content from which these nuances may be gleaned.
Or have a trainer bot that continuously consumes only this context from the open internet, so that you end up with a graph of the data over time...
Python: run it all through txtai / your library's nodes and ask questions of the data in real time?
(And it reminds me of the work of this fine person:
https://mlops.systems/#category=isafpr
https://mlops.systems/#category=afghanistan
I know sqlite-vss has been upgraded lately, but it was unstable for a while before that. Are you having good experiences with it?
I tried it out. This would be extremely useful to me to the point I'd be willing to happily pay for it, as it's something I would have otherwise had to spend a long time hacking together.
1) The returned output from a query seems pretty limited in length and breadth.
2) No apparent way to adjust my prompts to improve/adjust the output, e.g. it's not really 'conversational' (not sure if that is your intent)
Otherwise keep developing and be sure to push update notifications to your new mailing list! ;-)
Agree with this. I also think the emphasis here (to OP) should be "I'd be willing to happily pay for it" - i.e. I'd rather be paying a reasonable amount each month for something that is going to remain active than have the large (current) disparity between "free" and "enterprise". I'd say make some middle tiers of (I don't know?) $5 / $10 / $20 a month for reasonable numbers of queries or whatever. Keep the "enterprise" offering there for the biggies, but offer us small players some hope that this will be sufficiently funded / supported.
Brilliant idea, btw, I like it :-)
Thanks! The chat demo is actually just a small thing I put together as a preview of what can be done, but the main product is the API. But seeing that most users seem to like that, there's probably something there... If you want to email me at support at embedding.io with some requirements, I can see how to make that work for you.
In my opinion this is a transitional niche.
Soon websites/apps (whatever you want to call them) will have their own built-in handling for AI.
It's inefficient and rude to be scraping pages for content. Especially for profit.
I think each website/data-source having their own built-in AI is also a transitional period.
It's like every website having its own search engine vs. Google.
I doubt it. For larger players, data is valuable, so they are already preventing scraping (e.g. Reddit, LinkedIn). For smaller websites there's also not much of an incentive. Maybe hosting providers will help with preventing scraping, like DDoS protection?
I agree that this niche is DOA. No offence to OP, but the barrier to entry for this stuff is low. I built basically the same thing over a weekend for personal use: React frontend, Python server, Chroma for embeddings, SQLite cache, switching between OpenAI and Anthropic (I want to add Llama for fully local execution when I get a better PC). I have a local SPA with named "projects", can configure crawl depth from a start page, can set my crawl rate, don't have to pay to use it, and can choose any provider I want... I'm just one guy, and that took a day to get working plus a bit of polish.
I would guess the hardest thing by far in developing the advertised product would be user management, authentication, payments and wrapping the subscription model's business logic around the core loop. And probably scaling, as running embeddings over hundreds of scraped pages adds up quickly when free tier users start hammering you.
My question when deciding to sell something I've built is, if building the service model is harder than building the actual service, where is the value add?
My take on the natural evolution is that collating and caching documents, websites etc for search (with source attribution ideally) is a problem that will I think ultimately be solved by OS vendors. Why sign up for SaaS and expose all your content to untrustworthy 3rd parties, when it's built right in and handled by your "trusty" OS.
In the meantime, I reckon someone more dedicated than me will (or probably already has) open source something like I built but better, probably as a CLI tool, which will eventually reach maturity and be stolen cough I mean adopted by the top end of town.
Ethically I think nothing's changed for centuries in regards to plagiarism and attribution. It gets easier to copy work and thinking, but it also ultimately gets easier to acknowledge sources. Good folk will do the right thing as they always have done.
Regarding efficiency, I think tools like this have a place in making access to relevant and summarised knowledge more efficient during general research, when doing the broad strokes to find areas of interest to zoom in on, at which point more traditional approaches take over.
Interesting times anyway. I have to give credit to people that try, but I'm taking a back seat in thinking of ideas to productise in this space, as by the time I've thought it through, something new comes along that instantly makes it obsolete.
I spent a lot of time thinking about how to manage embeddings for docs sites. This is basically the same solution that I landed on but never got around to shipping as a general-purpose product.
A key question that the docs should answer (and perhaps the "How it works" page too): chunking. You generate an embedding for the entire page? Or do you generate embeddings for sections? And what's the size limit per page? Some of our docs pages have thousands of words per page. I'm doubtful you can ingest all that, let alone whether the embedding would be that useful in practice.
I chunk pages and generate embeddings for each chunk. So there's no real size limit per page.
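For readers wondering what chunking a long docs page can look like in practice, here is one common shape: split on headings so chunks follow the page's sections, then cap each section at a fixed word count. This is a generic sketch, not necessarily how embedding.io does it.

```python
# Generic heading-aware chunker; not necessarily embedding.io's actual approach.
import re

def chunk_page(markdown_text: str, max_words: int = 300) -> list[str]:
    # Split at markdown-style headings so chunks follow the document's sections.
    sections = re.split(r"\n(?=#{1,6} )", markdown_text)
    chunks = []
    for section in sections:
        words = section.split()
        # Cap very long sections so each chunk stays within the embedding model's limits.
        for i in range(0, len(words), max_words):
            chunks.append(" ".join(words[i:i + max_words]))
    return [c for c in chunks if c.strip()]
```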
I like this. Abstracting away the management of embeddings and vector database is something I desperately want, and adding in website crawling is useful as well.
I like this a lot!
But: I feel the more of these services come into being, the more likely it is that every website starts putting up gates to keep the bots away.
Sort of like a weird GenAI take on Cixin Liu's Dark Forest hypothesis (https://en.wikipedia.org/wiki/Dark_forest_hypothesis).
(Edited to add a reference.)
Responding just because it's a pet peeve of mine: Cixin Liu did not invent the dark forest hypothesis. People were discussing it, and writing science fiction books about it, for decades before the 3BP books were published. Nothing against him, and he definitely helped popularize the concept, but I think it's incorrect to refer to it as "Cixin Liu's hypothesis".
> I feel the more of these services come into being, the more likely it is that every website starts putting up gates to keep the bots away
That's why we need microtransactions, because I'd rather be able to have both nice AI services and useful data repositories that they pull from, than have to choose just one. (and that one would be AI services, because you can't stop all the scrapers, so data sources will just keep tightening their restrictions)
This would be amazing
Does anyone know of a way to do this locally with Ollama? The 'chat with documentation' thing is something I was thinking of a week ago when dealing with hallucinating cloud AI. I think it'd be worth the energy to embed a set of documentation locally to help with development.
Yes, Langchain has tooling specifically for connecting to Ollama that can be chained with other tooling in their library to pull in your documents, chunk them, and store them for RAG. See here for a good example notebook:
https://github.com/langchain-ai/langchain/blob/master/cookbo...
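As a rough sketch of what that looks like locally (import paths and model names vary between LangChain versions, so treat these as assumptions rather than the one true recipe):

```python
# Minimal local RAG sketch with Ollama + LangChain.
# Assumes `ollama pull llama3` and `ollama pull nomic-embed-text` have been run;
# import paths may differ slightly depending on your LangChain version.
from langchain_community.chat_models import ChatOllama
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_text_splitters import RecursiveCharacterTextSplitter

docs = WebBaseLoader("https://example.com/docs/page").load()
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
chunks = splitter.split_documents(docs)

vectorstore = FAISS.from_documents(chunks, OllamaEmbeddings(model="nomic-embed-text"))
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})

question = "How do I configure the crawler?"
context = "\n\n".join(doc.page_content for doc in retriever.invoke(question))
answer = ChatOllama(model="llama3").invoke(
    f"Answer using only this context:\n{context}\n\nQuestion: {question}"
)
print(answer.content)
```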
https://github.com/open-webui/open-webui
Looks cool! Anything about how it compares to similar RAG-as-a-service products? It's something I've been researching a little.
FWIW, the pricing model of jumping from free to "contact us" is slightly ominous.
With these early stage startups it often means they haven’t really figured out how to price their product and will cut you a very generous deal if you push a bit
Do you plan on doing revenue sharing with the site owners?
Do OpenAI and all the other LLM behemoths?
Could you support ingesting WARC files?
https://github.com/harvard-lil/warc-gpt
https://lil.law.harvard.edu/blog/2024/02/12/warc-gpt-an-open...
How are you deciding on the best RAG configuration for your app? How do you decide on a chunking strategy, embeddings, and retrievers? Check out our open-source tool, RAGBuilder, which helps developers get to the top-performing RAG setup for their data: https://news.ycombinator.com/item?id=41145093
How does this handle changes to the website? Does it re-crawl the whole site periodically and regenerate the embeddings? Or is there some sort of diff-checker that only picks up pages that have changed, added, or deleted?
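One common answer to this (not necessarily what embedding.io does) is to keep a content hash per URL during re-crawls and only re-embed pages whose hash has changed; deletions fall out of the same table by noticing URLs that no longer appear in the crawl.

```python
# Sketch of hash-based change detection between crawls; illustrative only.
import hashlib
import sqlite3

import requests

db = sqlite3.connect("crawl_state.db")
db.execute("CREATE TABLE IF NOT EXISTS pages (url TEXT PRIMARY KEY, content_hash TEXT)")

def needs_reembedding(url: str) -> bool:
    body = requests.get(url, timeout=30).text
    new_hash = hashlib.sha256(body.encode()).hexdigest()
    row = db.execute("SELECT content_hash FROM pages WHERE url = ?", (url,)).fetchone()
    if row and row[0] == new_hash:
        return False  # unchanged: keep the existing embeddings
    db.execute(
        "INSERT INTO pages (url, content_hash) VALUES (?, ?) "
        "ON CONFLICT(url) DO UPDATE SET content_hash = excluded.content_hash",
        (url, new_hash),
    )
    db.commit()
    return True  # new or changed page: re-chunk and re-embed it
```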
Interesting, I wanted to do this for a personal use case (mostly learning), but with PDFs. What's the tech stack? I have explored using the AWS AI tools, but they seem a bit overkill for what I want to do.
Here's some code to deal with that:
https://github.com/MittaAI/SlothAI/blob/main/SlothAI/lib/pro...
https://github.com/MittaAI/mitta-community/tree/main/service...
There's code in there that just reads PDF metadata as well, but you can't always guarantee it's there in a PDF.
If the PDFs are textual or have an OCR layer, then pdftotext from the Poppler suite ought to be enough? If not, add Tesseract/ocrmypdf to the pipeline?
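A minimal sketch of that fallback, assuming the pdftotext and ocrmypdf binaries are installed and on PATH:

```python
# Sketch: try Poppler's pdftotext first, fall back to ocrmypdf for scanned PDFs.
import subprocess
from pathlib import Path

def pdf_to_text(pdf_path: str) -> str:
    out = Path(pdf_path).with_suffix(".txt")
    subprocess.run(["pdftotext", pdf_path, str(out)], check=True)
    text = out.read_text().strip()
    if text:
        return text  # the PDF already had a text layer
    # No text layer: OCR the pages that need it, then extract again.
    ocred = Path(pdf_path).with_suffix(".ocr.pdf")
    subprocess.run(["ocrmypdf", "--skip-text", pdf_path, str(ocred)], check=True)
    subprocess.run(["pdftotext", str(ocred), str(out)], check=True)
    return out.read_text().strip()
```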
Tech stack is a mix of serverless Laravel, with Cloudflare and AWS functions, and some Pinecone for vector storage. Still experimenting on a few things but don't want to over-engineer unless I know where I'm going.
Try aichat: https://github.com/sigoden/aichat
Nice! What's the underlying model / RAG approach being used? It'd be good to understand that part, as presumably it will have a big impact on the performance / usability of the results.
I feel like this is unethical. You built yet another bot scraper. It would only be an ethical tool if it validated I own the website I am scraping before it starts.
Yes, only big conglomerates can scrape pages now. If you're not Google stealing the info, then... right?
This is probably a losing direction - protecting your little island of content in the sea of internet and LLM outputs. Get more value by exposure. This is the trend of open source, wikipedia and open scientific publication. LLMs double down on the same collaborative approach to intelligence.
You can of course decouple from the big discussion and isolate your content with access restrictions, but the really interesting activity will be outside. Look, for example, at llama.cpp and the other open-source AI tools we have gotten recently. So much energy and enthusiasm, so much collaboration. Closed stuff doesn't get that level of energy.
I think IP laws are in for a reckoning; protecting creativity by restricting it is not the best idea in the world. There are better models. Copyright is anachronistic: it was invented in the era of the printing press, when copying became easy. LLMs remix, they don't simply copy - even the name is unfitting for the new reality. We need to rename it remixright.
Well, Google itself is just an unethical bot scraper then...
I like the concept, the documentation is very good and I even enjoy the domain name. This is an excellent launch and congratulations on getting it out.
Can I query multiple vectorized websites at once? Can I export vectorized websites and host them myself? Any chance to export them to a no-code format, like PDF?
You can group as many websites as you want into a collection. Then query that collection. Not sure what you mean by exporting; you would like to export the vectors themselves? Or just the chunks of text from the websites?
I find it interesting that as an (edit: UK) academic researcher, I would likely be forbidden to use tools like this, which fail basic ethics standards, regulations such as GDPR, and practical standards such as respecting robots.txt [given there's no information on embedding.io, it's unlikely I can block the crawler when designing a website].
There's still room for an ethical development of such crawlers and technologies, but it needs to be consent-first, with strong ethical and legal standards. The crazy development of such tools has been a massive issue for a number of small online organisations that struggle with poorly implemented or maintained bots (as discussed for OpenStreetMap or Read The Docs).
I'm less convinced. Are you saying it's unethical to automate browsing a site?
Because if you save the pages you browse on some site, they're yours (authors don't own your cache).
Perhaps you're arguing that if you wrote a lightweight script/browser (which is just your user agent) to save some website for offline use, that'd be unethical and GDPR violating? Again, I don't think so but maybe I'm missing something. But perhaps this turns on what defines a "user agent".
Perhaps this becomes a "depth of pre-fetch" question. If your browser prefetches linked pages, that's "automated" downloading, akin to the script approach above. Downloading. To your cache. Which you own. (Where I struggle to see an ethical violation)
Genuinely curious where the line is, or what exactly here is triggering ethics, GDPR and practical standards?
This is interesting. Can it work with any website, even, say, document repositories hosted on standard servers like GitBook?
It works with pretty much any website, and yes, it works well with docs hosted on GitBook - I have embedded a website that's hosted there.
Would you share the source? I want to use this for a private internal network of pages. How would that work?
Interested in seeing whether this will be widespread in 5 years or whether sites will have fought back.
Sites are already fighting back.
Twitter and Reddit locked down their APIs. Soon enough, you’ll need an account to even access any content
Would be great to use for developer documentation for various languages, frameworks and libraries.
Is there a way to deal with websites where you need to login? Like subscription based sites?
Unless you own those sites, I'm afraid that's not going to be possible.
This looks interesting, but I get a 404 on the iframe when I try to go into the chat.
Sorry about that, a bit too much load at the moment
#1. Gratuitous self promotion (but also my honest best advice): The future of knowledge bases is ScrollSets: https://sets.scroll.pub/
#2. If you are interested in knowledge bases, see #1
So I provide a URL, and your service does the crawling of the site?
Can it get content that is gated/behind login?
How would you expect it to do that??
Experiencing many Internal Server Errors.
> Enterprise: Contact Us
If there is no certifications or compliance information then I don't think there is anything to discuss about any enterprise plan.
Gotta start somewhere :)
Which LLM model is it using?
Will this work for forums?
How do I feed it a sitemap?
It currently tries to find a sitemap on its own, but letting users add their own is on the roadmap.
How much does it cost?
Does it embed images as well? If not, do you plan to do so?
It doesn't embed images, no. But that's a great idea for the roadmap!
How does this work?
I do this with https://mitta.ai by using a Playwright container that does a callback to a pipeline that uses either meta data from the PDF or sends it to an EasyOCR deployment on a GPU instance on Google for text extraction. Then I use a custom chunker and instructor/xl embeddings.
All of that code is Open Source, and works well for most sites. Some sites block Google IPs, but the Playwright container can run locally, so should be able to work around it with some minimal effort.
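For the run-it-locally case, the core of such a fetcher is small. This is a generic Playwright sketch (not the mitta.ai pipeline itself); the OCR and embedding steps would sit downstream of it.

```python
# Generic Playwright fetch of a fully rendered page; not mitta.ai's actual code.
# Requires: pip install playwright && playwright install chromium
from playwright.sync_api import sync_playwright

def fetch_rendered_text(url: str) -> str:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")
        # inner_text("body") returns the text of the page after JavaScript has run.
        text = page.inner_text("body")
        browser.close()
        return text
```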
Give it URLs or domains, and it will crawl and extract their content, embed them in a vector database, and give you an endpoint that you can then query when doing RAG stuff or semantic search.
There are a few ways. I built something similar (huckai.com) on top of vectara.com. They have open-sourced their versions: https://github.com/vectara
You can also do this on AWS now fairly easily. https://medium.com/data-reply-it-datatech/how-to-build-a-cus...
The lablab.ai Discord community is a pretty good place to learn how this product category is evolving.
Any open source tools for doing just this?
Does it hallucinate much?
I made a similar open source app a year ago or so https://github.com/mkwatson/chat_any_site
Does this respect robots.txt?
I hope this gets answered.
Also, I've checked their docs to see if there is any mention of the user agents or IP ranges they use for scraping, with no luck.
It does respect robots.txt when crawling. I'll add more details about this in the docs.
Valid question and I am sure it doesn't.
Can this be deployed on-prem, or is it a cloud-toy?
Currently just a cloud-toy.