Alongside Anthropic’s Claude offering, a new DeepSeek study looks into compressing large image-based text documents for AI processing.
Global AI darlings Anthropic and DeepSeek have announced new additions to their arsenals shortly after their most recent major launches.
OpenAI-rival Anthropic has released a beta research preview for developers to access Claude Code on the web – meaning bug backlogs, routine fixes or parallel development work can be run on Anthropic-managed cloud infrastructure. Developers can connect their GitHub repositories, describe what they need, and allow Claude to implement solutions.
Additionally, Claude Code is being made available on iOS devices, though both experiences are expected to be tweaked as users submit feedback to Anthropic.
According to Anthropic, all Claude Code tasks will run in an isolated sandbox environment with network and filesystem restrictions.
Moreover, GitHub interactions will be handled through a secure proxy service that restricts Claude’s access to only the repositories users have authorised, the company added. Claude Code for the web is available now for Pro and Max users.
Anthropic, still fresh from its $183bn valuation and a preliminarily approved $1.5bn landmark copyright settlement, just launched the newest model in its Claude series late last month.
The company claimed that the new Claude Sonnet 4.5 was the “best” coding model in the world. More recently, it launched Claude Haiku 4.5, an updated version of its cheapest AI model.
Earlier this month, Reuters reported that the company is set to hit a $9bn annual revenue run rate by the end of the year, and projects that it will almost triple its annualised revenue in 2026, largely due to growing demand from the enterprise sector.
Recently, Anthropic announced its largest enterprise deal to date – an expanded alliance with Deloitte to make Claude available to Deloitte’s 470,000-strong workforce, and to develop new “industry-specific solutions” powered by Claude.
Meanwhile, Chinese AI giant DeepSeek has built a new optical character recognition (OCR) system that can compress large image-based text documents to help AI models process more data. DeepSeek-OCR is presented as a feasibility study into compressing long contexts through optical 2D mapping.
The OCR system is made up of two components – the DeepEncoder, which serves as the core of the engine, and a decoder. To train and assess the system, DeepSeek researchers used 30m PDF pages in around 100 languages, along with synthetic diagrams, chemical formulas and geometric figures.
The study found that when the number of text tokens was within 10 times the number of vision tokens, the model achieved a decoding precision of 97pc, while at a compression ratio of 20 times – text tokens numbering 20 times the vision tokens – accuracy fell to around 60pc.
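To make those ratios concrete, here is a minimal illustrative sketch – the token counts below are assumptions chosen for the example, not figures drawn from the study’s code:

```python
# Illustrative arithmetic only: the token counts are assumed values
# chosen to match the ratios reported in the DeepSeek-OCR study.

def compression_ratio(text_tokens: int, vision_tokens: int) -> float:
    """Number of text tokens represented per vision token."""
    return text_tokens / vision_tokens

# A page of ~1,000 text tokens rendered into 100 vision tokens sits at
# a 10x compression ratio -- the regime where the study reports ~97pc
# decoding precision.
print(compression_ratio(1_000, 100))  # 10.0

# Squeezing the same text into 50 vision tokens doubles the ratio to
# 20x, where reported accuracy falls to around 60pc.
print(compression_ratio(1_000, 50))   # 20.0
```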
According to DeepSeek, the study shows promising signs for the development of future vision language models and large language models.
The study comes just after the company launched what was thought to be its most important product release since V3 and R1. Its latest “experimental” model, V3.2-Exp, is designed as an “intermediate step” toward its next-generation architecture, according to the company.