//Hype.blog
Intro
Hype.blog scours the web for trends and builds curated feeds of news, deals, media reviews, and more.
A virtual newsroom is run automatically by AI agents, each with its own characteristics and personality. The newsroom contains writers, editors, reviewers/critics, deal finders, and designers, who collaboratively brainstorm, judge which content to publish and which to reject, and then create the content through multiple revisions, much like a real newsroom. Eventually, short- and long-form video content will be produced for YouTube and social channels.
Tech stack
Admin / CMS dashboard
An admin dashboard lets human staff easily edit existing content or write new content. Every post can be routed through human journalists, who revise it and add further reporting, but posts can also be published automatically by the AI. Staff can pitch articles via the CLI / API to co-write a story with an AI agent, as sketched below.
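As a rough illustration of that flow (the endpoint path and payload fields here are hypothetical, not Hype's actual API), a pitch from the terminal could look something like this:

```python
# Hypothetical pitch CLI: POSTs a story idea to the newsroom API so an AI agent
# can pick it up and start co-writing. Endpoint and field names are assumptions.
import argparse

import requests


def main() -> None:
    parser = argparse.ArgumentParser(description="Pitch a story to the Hype newsroom")
    parser.add_argument("headline", help="Working headline for the pitch")
    parser.add_argument("--angle", default="", help="Notes on the angle or sources to use")
    parser.add_argument("--api", default="https://hype.blog/api/pitches",
                        help="Pitch endpoint (hypothetical)")
    args = parser.parse_args()

    resp = requests.post(
        args.api,
        json={"headline": args.headline, "angle": args.angle},
        timeout=30,
    )
    resp.raise_for_status()
    print("Pitch queued:", resp.json())


if __name__ == "__main__":
    main()
```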
CMS dashboard using Payload CMS.
API functionality examples
A post rendered on the frontend with SEO, using SSR via Next.js.
The scraping and NLP APIs written for Hype are straightforward and abstract away the underlying complexity.
Scraper API code example that utilizes multiprocessing and Selenium.
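For readers without the screenshot handy, a minimal sketch of that pattern is shown below: a multiprocessing.Pool fans URLs out to worker processes, each driving a headless Chrome via Selenium. The function and field names are illustrative, not Hype's actual API.

```python
# Minimal parallel-scraper sketch: one headless browser per worker process.
from multiprocessing import Pool

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By


def scrape(url: str) -> dict:
    """Load one page in a headless browser and return its title and body text."""
    opts = Options()
    opts.add_argument("--headless=new")
    driver = webdriver.Chrome(options=opts)
    try:
        driver.get(url)
        return {
            "url": url,
            "title": driver.title,
            "text": driver.find_element(By.TAG_NAME, "body").text,
        }
    finally:
        driver.quit()


def scrape_all(urls: list[str], workers: int = 4) -> list[dict]:
    """Scrape many pages in parallel across worker processes."""
    with Pool(processes=workers) as pool:
        return pool.map(scrape, urls)


if __name__ == "__main__":
    for page in scrape_all(["https://example.com"]):
        print(page["title"])
```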
The AI agents are defined and customized with JSON data, as in the sketch below.
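A rough sketch of what that could look like; the field names and the persona are invented for illustration and are not Hype's actual schema.

```python
# Hypothetical agent definition loaded from JSON; "role", "personality" and
# "temperature" are assumed fields, not Hype's real schema.
import json
from dataclasses import dataclass


@dataclass
class Agent:
    name: str
    role: str
    personality: str
    temperature: float

    def system_prompt(self) -> str:
        # The persona is folded into the system prompt the agent runs with.
        return f"You are {self.name}, a {self.role}. Personality: {self.personality}."


AGENT_JSON = """
{
  "name": "Dealia",
  "role": "deal finder",
  "personality": "skeptical, numbers-driven, allergic to fake discounts",
  "temperature": 0.4
}
"""

agent = Agent(**json.loads(AGENT_JSON))
print(agent.system_prompt())
```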
Deal Finder AI agent in action.
Virtual newsroom in operation
The virtual newsroom being run collaboratively by autonomous AI agents.
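The collaboration roughly follows pitch → judgement → draft → revision rounds. The sketch below is a simplified, assumed version of that loop, with generate() standing in for whatever LLM call each agent actually makes; it is not the production pipeline.

```python
# Simplified newsroom loop: an editor judges the pitch, a writer drafts, and the
# pair iterate through revision rounds. generate() is a stub for the real LLM call.
from dataclasses import dataclass


def generate(prompt: str) -> str:
    """Stub for the underlying model call; replace with a real LLM client."""
    return f"[model output for: {prompt[:60]}...]"


@dataclass
class NewsroomAgent:
    name: str
    role: str  # e.g. "writer", "editor", "critic"

    def act(self, task: str, context: str = "") -> str:
        return generate(f"You are {self.name}, the {self.role}.\nTask: {task}\n{context}")


def run_story(pitch: str, writer: NewsroomAgent, editor: NewsroomAgent, rounds: int = 2) -> str | None:
    # The editor first judges whether the pitch is worth covering at all.
    verdict = editor.act(f"Should the newsroom cover this pitch? Reply YES or NO: {pitch}")
    if verdict.strip().upper().startswith("NO"):
        return None

    draft = writer.act(f"Write an article for this pitch: {pitch}")
    for _ in range(rounds):
        notes = editor.act("Give revision notes for this draft.", draft)
        draft = writer.act("Revise the draft using these notes.", f"{draft}\n\nNotes:\n{notes}")
    return draft


if __name__ == "__main__":
    article = run_story(
        "Brandon Sanderson's back catalogue is trending on Reddit",
        writer=NewsroomAgent("Wren", "writer"),
        editor=NewsroomAgent("Edda", "editor"),
    )
    print(article)
```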
Proof that the AI writer reads the source content accurately (usually) and writes without hallucinating (unless it lacks access to the data it is talking about). In the dashboard, every post includes a source link back to the original scraped page so human staff can verify it.
An example article about Brandon Sanderson and his works, unedited from what the AI wrote, based on a trending Reddit thread discussing literature.
Verifying the correctness of the AI-written article against the original source context.
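As a rough illustration of the kind of spot-check that stored source link makes possible, the sketch below flags article sentences whose proper nouns never appear in the scraped source text. It is deliberately naive (sentence-initial words can trigger false positives) and is not Hype's actual verification tooling.

```python
# Naive hallucination spot-check: flag article sentences that mention
# capitalized names missing from the scraped source text. Illustrative only.
import re


def flag_unsupported_sentences(article: str, source_text: str) -> list[str]:
    """Return article sentences containing proper nouns absent from the source."""
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", article.strip()):
        names = re.findall(r"\b[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*\b", sentence)
        if any(name not in source_text for name in names):
            flagged.append(sentence)
    return flagged


if __name__ == "__main__":
    source = "Redditors are revisiting Brandon Sanderson's Mistborn trilogy this week."
    article = "Brandon Sanderson is trending again. Fans credit a new Netflix deal."
    print(flag_unsupported_sentences(article, source))
    # -> ['Fans credit a new Netflix deal.']  ("Fans" and "Netflix" never appear in the source)
```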