Tether announces QVAC, a cross-platform BitNet LoRA framework enabling billion-parameter AI models to be trained on consumer-grade devices
Odaily Planet Daily reports that, according to an official announcement, Tether has launched a cross-platform BitNet LoRA fine-tuning framework within QVAC Fabric, optimized for training and inference of Microsoft's BitNet (a 1-bit LLM). The framework significantly reduces compute and memory requirements, allowing billion-parameter models to be trained and fine-tuned on laptops, consumer GPUs, and smartphones.
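The announcement does not include code, but the memory argument is easier to see with a generic LoRA sketch: the (quantized) base weights stay frozen, and only two small low-rank matrices are trained. The snippet below is an illustrative PyTorch sketch under that assumption, not Tether's QVAC Fabric API; the class name, rank, and layer sizes are hypothetical.

```python
# Illustrative sketch only: a generic LoRA adapter over a frozen linear layer.
# It shows why fine-tuning touches only a tiny fraction of parameters, which is
# what makes on-device training plausible. Not Tether's QVAC Fabric API.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # frozen (e.g. 1-bit quantized) base weights
        # Low-rank update W + scale * (B @ A); only A and B are trained.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

layer = LoRALinear(nn.Linear(4096, 4096), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable:,} of {total:,} ({100 * trainable / total:.2f}%)")
```

With these hypothetical sizes, fewer than 0.5% of the layer's parameters receive gradients, which is the core reason LoRA-style fine-tuning fits within consumer memory budgets.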
This is the first time BitNet models have been fine-tuned on mobile GPUs (including Adreno, Mali, and Apple Bionic). In tests, a 125M-parameter model was fine-tuned in about 10 minutes and a 1B-parameter model in roughly an hour, with the approach scaling up to 13B-parameter models on smartphones.
The framework also supports heterogeneous hardware such as Intel and AMD processors and Apple Silicon, achieving 1-bit LLM LoRA fine-tuning on non-NVIDIA devices for the first time. On the performance side, BitNet models run inference 2 to 11 times faster on mobile GPUs than on CPUs, while cutting VRAM usage by up to roughly 77.8% compared with traditional 16-bit models.
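For a rough sense of scale, weight storage alone shrinks with bits per parameter; the back-of-envelope below is an assumption-laden sketch, not a derivation of the reported 77.8% figure, which presumably also counts memory (activations, adapters) that does not shrink with weight precision.

```python
# Rough back-of-envelope (assumption, not from the announcement): weight storage
# for a 1B-parameter model at different precisions.
params = 1_000_000_000
for name, bits in [("fp16", 16), ("int8", 8), ("BitNet ~1.58-bit", 1.58)]:
    gib = params * bits / 8 / 2**30
    print(f"{name:>18}: {gib:.2f} GiB for weights alone")
```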
Tether says the technology is expected to reduce reliance on high-end compute and cloud infrastructure, promote decentralized, on-device AI training, and lay the groundwork for new applications such as federated learning.