Researchers have developed big algebras, a new mathematical tool that connects abstract algebra and geometry, enabling unprecedented insights into symmetry groups. This breakthrough has the potential to strengthen the connection between quantum physics and number theory.
Researchers at Peking University developed a dual in-memory computing (dual-IMC) scheme to accelerate machine learning and improve energy efficiency. The new computing scheme stores both neural network weights and inputs in memory, reducing data movement and power consumption.
Researchers extend spatially incoherent diffractive networks to perform complex-valued linear transformations with negligible error, opening up new applications in fields like autonomous vehicles. This breakthrough enables the encryption and decryption of complex-valued images using spatially incoherent diffractive networks.
Researchers developed an easy-to-use optical chip that can configure itself for different functions, enabling optical neural network applications. The chip achieves positive real-valued matrix computation and demonstrates optical routing, low-loss light energy splitting, and matrix computations.
Researchers designed a simplified Mach-Zehnder interferometer mesh for real-valued matrix-vector multiplication, reducing hardware requirements and energy consumption. The new mesh detects incoherent light and is scalable, making it suitable for large-scale optical neural networks.
Researchers developed a new photonic blockchain called LightHash that uses a silicon photonics chip to reduce energy consumption in cryptocurrency mining. The approach could enable low-energy optical computing, reducing data centers' energy consumption and paving the way for more eco-friendly cryptocurrencies.
Researchers developed a novel photonic chip design based on a crossbar layout that outperforms state-of-the-art photonic counterparts in scalability and technical versatility. The synergy of powerful photonics with the crossbar architecture enables next-generation neuromorphic computing engines.
Researchers have developed a compact silicon photonic compute engine capable of computing tiled matrix multiplications at a record-high 50 GHz clock frequency. This achievement promises to contribute significantly to data center cybersecurity and to enable real-time detection of malicious packets.
Researchers have developed a diffractive optical processor that can compute hundreds of transformations in parallel using wavelength multiplexing. The processor, which is powered by light instead of electricity, can execute multiple complex functions simultaneously at the speed of light.
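The parallelism described above, where each wavelength channel carries its own independent linear transformation of the same input, can be modeled digitally as a batched matrix-vector product. A minimal sketch, assuming illustrative channel counts and sizes rather than the paper's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n = 4, 8                  # 4 "wavelengths", 8-element input
transforms = rng.standard_normal((n_channels, n, n))  # one matrix per channel
x = rng.standard_normal(n)            # a single input "scene"

# All channels act on the same input simultaneously: y[c] = transforms[c] @ x
y = np.einsum('cij,j->ci', transforms, x)

# Equivalent sequential computation, channel by channel
y_seq = np.stack([T @ x for T in transforms])
assert np.allclose(y, y_seq)
```

In the optical processor the batched products happen physically in parallel, one per wavelength; the einsum only mimics that behavior in software.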
A team of researchers at Harvard University has developed an ionic circuit that performs analog matrix multiplication, a key operation in neural networks, using ions in liquid. The breakthrough uses a pH-gated ionic transistor and expands to a 16x16 array for more complex computations.
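The analog principle behind crossbar arrays like this one, whether ionic or electronic, is that physics does the multiply-accumulate: input voltages drive currents through a grid of programmable conductances, and summed column currents equal a matrix-vector product. A minimal sketch with illustrative values, not the Harvard device's parameters:

```python
import numpy as np

# Conductance grid G encodes the weight matrix; voltages v encode the input.
G = np.array([[1.0, 0.5],
              [0.2, 0.8]])
v = np.array([0.3, 0.6])

# Ohm's law per cell plus Kirchhoff's current law per column
# yields the matrix-vector product i = G @ v in a single physical step.
i = G @ v
```

Scaling to a 16x16 array, as in the paper, is just a larger G; the computation still completes in one analog step rather than 256 sequential multiplies.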
Researchers have developed a photonic chip that can perform large complex-valued matrix-vector multiplications, breaking the bottleneck in traditional optical computing schemes. The chip has great potential for applications in artificial intelligence computing, such as image convolution and discrete Fourier transforms.
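One application the article mentions, the discrete Fourier transform, is itself a complex-valued matrix-vector multiplication, which is why such a chip can express it directly. A minimal numerical sketch with an illustrative 4-point size:

```python
import numpy as np

n = 4
k = np.arange(n)
# DFT matrix: F[j, k] = exp(-2*pi*i*j*k / n), inherently complex-valued
F = np.exp(-2j * np.pi * np.outer(k, k) / n)

x = np.array([1.0, 2.0, 3.0, 4.0])
y = F @ x                              # complex matrix-vector product
assert np.allclose(y, np.fft.fft(x))   # matches the standard FFT result
```

A chip that handles complex entries natively avoids the decomposition into multiple real-valued multiplies that traditional optical schemes require.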
Researchers at Rice University have optimized artificial intelligence software to run on commodity processors and train deep neural networks up to 15 times faster than top GPU trainers. The 'sub-linear deep learning engine' (SLIDE) uses hash tables to solve the search problem of matrix multiplication, reducing training time for AI models.
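The core idea, replacing a dense matrix multiply with a hash-table lookup, can be sketched with locality-sensitive hashing: neurons whose weight vectors hash to the same bucket as the input are likely to have large activations, so only those are evaluated. This is an illustrative signed-random-projection sketch, not SLIDE's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_neurons, n_bits = 16, 1000, 8
W = rng.standard_normal((n_neurons, d))    # layer weight matrix
planes = rng.standard_normal((n_bits, d))  # random hyperplanes for hashing

def lsh_code(v):
    """Hash a vector to an n_bits-bit bucket id via random hyperplane signs."""
    bits = (planes @ v > 0).astype(int)
    return int(''.join(map(str, bits)), 2)

# Build hash table once: bucket id -> indices of neurons in that bucket
table = {}
for j in range(n_neurons):
    table.setdefault(lsh_code(W[j]), []).append(j)

x = rng.standard_normal(d)
candidates = table.get(lsh_code(x), [])
# Evaluate only the retrieved neurons instead of all n_neurons dot products
activations = {j: float(W[j] @ x) for j in candidates}
```

Because only a small fraction of neurons fall in the matched bucket, the per-sample cost is sub-linear in the layer width, which is the effect SLIDE exploits on CPUs.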
DistME, developed by a team at DGIST, is a fast and elastic distributed matrix computation engine that uses GPUs. It can analyze matrix data 100 times larger than SystemML can handle, achieving a 6.5-14x speedup over ScaLAPACK.
Researchers at MIT and Stanford University present a new system for protecting genomic data privacy in large-scale biomedical studies. The system uses secret sharing to divide sensitive data among multiple servers, enabling efficient privacy protection for millions of genomes.
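The dividing step can be illustrated with additive secret sharing over a prime field: a sensitive value is split into random shares so that no proper subset of servers learns anything, yet the shares sum back to the value, and sums of shares reconstruct sums of values. A minimal sketch with illustrative parameters, not the paper's protocol:

```python
import secrets

P = 2**61 - 1  # a Mersenne prime modulus for the finite field

def share(value, n_servers):
    """Split value into n_servers additive shares mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_servers - 1)]
    shares.append((value - sum(shares)) % P)  # last share forces the sum
    return shares

def reconstruct(shares):
    return sum(shares) % P

genotype = 2  # e.g. an allele count at one locus
shares = share(genotype, 3)
assert reconstruct(shares) == genotype

# Additive homomorphism: each server sums its shares locally, so
# aggregate statistics emerge without any server seeing a genome.
g2 = 1
shares2 = share(g2, 3)
summed = [(a + b) % P for a, b in zip(shares, shares2)]
assert reconstruct(summed) == genotype + g2
```

This additive-homomorphism property is what makes large-scale studies efficient: servers compute on shares directly rather than decrypting anything.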
Researchers developed a system called Taco that generates optimized code for tensor algebra operations on sparse data, offering a 100-fold speedup over existing software. The system automatically optimizes code by tracking and discarding zero entries, reducing wasted computation.
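The principle of discarding zeros can be sketched with a compressed sparse row (CSR) matrix-vector multiply, which stores only nonzero entries so the inner loop never touches a zero. This is a plain-Python illustration of the idea, not Taco's generated code:

```python
def to_csr(A):
    """Convert a dense row-list matrix to CSR (data, indices, indptr)."""
    data, indices, indptr = [], [], [0]
    for row in A:
        for j, v in enumerate(row):
            if v != 0:
                data.append(v)      # keep only nonzero values
                indices.append(j)   # and their column positions
        indptr.append(len(data))    # row boundaries into data/indices
    return data, indices, indptr

def csr_matvec(data, indices, indptr, x):
    y = [0.0] * (len(indptr) - 1)
    for i in range(len(y)):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]  # only nonzeros are multiplied
    return y

A = [[0, 0, 3.0],
     [1.0, 0, 0],
     [0, 2.0, 0]]
x = [1.0, 2.0, 3.0]
data, indices, indptr = to_csr(A)
assert csr_matvec(data, indices, indptr, x) == [9.0, 1.0, 4.0]
```

On matrices that are mostly zeros, the saved work grows with sparsity; Taco goes further by generating specialized fused loops for whole tensor-algebra expressions.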
A team of MIT researchers has developed a new approach to deep learning computations using light instead of electricity, potentially improving speed and efficiency for certain applications. The new programmable nanophotonic processor uses multiple light beams to carry out complex calculations with minimal energy consumption and near-instant results.
Researchers developed a chip that performs real-time HEVC encoding and decoding, enabling four times the resolution of current TVs. It achieves this through pipelining and matrix-multiplication optimizations that reduce computational complexity.