Subject: General Tech | August 27, 2014 - 12:34 PM | Jeremy Hellstrom
Tagged: chrome, 64-bit
The new version of Chrome now supports 64-bit, if you so choose to install that version of Google's browser. The ability to address more memory is not the only benefit of this new version; it is also optimized for the VP9 codec used for YouTube HD, which The Inquirer was told now processes 15% more quickly, and they agreed that the browser felt generally faster when surfing. The new version should also offer improved protection from memory layout vulnerabilities, so it is certainly worth using on your 64-bit machine.
"GOOGLE'S 64-BIT EDITION of the Chrome web browser for Windows has been declared stable with the release of Chrome 37."
Here is some more Tech News from around the web:
- HP recalls six million laptop power cables due to fire risk @ The Inquirer
- Linux turns 23 and Linus Torvalds celebrates as only he can @ The Register
NVIDIA Reveals 64-bit Denver CPU Core Details, Headed to New Tegra K1 Powered Devices Later This Year
Subject: Processors | August 12, 2014 - 01:06 AM | Tim Verry
Tagged: tegra k1, project denver, nvidia, Denver, ARMv8, arm, Android, 64-bit
During GTC 2014 NVIDIA launched the Tegra K1, a new mobile SoC that contains a powerful Kepler-based GPU. Initial processors (and the resultant design wins such as the Acer Chromebook 13 and Xiaomi Mi Pad) utilized four ARM Cortex-A15 cores for the CPU side of things, but later this year NVIDIA is deploying a variant of the Tegra K1 SoC that switches out the four A15 cores for two custom (NVIDIA developed) Denver CPU cores.
The custom 64-bit Denver CPU cores use a 7-way superscalar design and run a custom instruction set. Denver is a wide but in-order architecture that allows up to seven operations per clock cycle. NVIDIA uses on-the-fly binary translation to convert ARMv8 instructions to microcode before execution. A software layer backed by a 128MB cache implements the Dynamic Code Optimization technology, allowing the processor to examine and optimize the ARM code, convert it to the custom instruction set, and cache the converted microcode of frequently used applications (the cache can be bypassed for infrequently processed code). Using the wider execution engine and Dynamic Code Optimization (which is transparent to ARM developers and does not require updated applications), NVIDIA touts the dual Denver core Tegra K1 as being at least as powerful as the quad- and octo-core packing competition.
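For readers curious what "cache the converted microcode of frequently used applications" means in practice, here is a toy sketch of a translation cache in Python. This is purely an illustration of the general hot-code-caching technique; the class name, threshold, and "microcode" representation are invented and do not reflect NVIDIA's actual implementation.

```python
# Toy sketch of a Dynamic-Code-Optimization-style translation cache.
# All names and thresholds here are hypothetical illustrations.

HOT_THRESHOLD = 3  # hypothetical: cache a block after it runs this many times

class TranslationCache:
    def __init__(self):
        self.counts = {}      # how often each ARM code block has been seen
        self.translated = {}  # block -> cached "microcode" translation

    def execute(self, arm_block):
        # Hot blocks are served straight from the optimization cache.
        if arm_block in self.translated:
            return self.translated[arm_block]
        self.counts[arm_block] = self.counts.get(arm_block, 0) + 1
        microcode = f"uops({arm_block})"  # stand-in for real translation
        if self.counts[arm_block] >= HOT_THRESHOLD:
            self.translated[arm_block] = microcode  # cache frequently used code
        # Cold code bypasses the cache and is simply decoded each time.
        return microcode
```

The point of the bypass is the same one NVIDIA describes: translation and optimization only pay off for code that runs often, so rarely executed code is not worth caching.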
Further, NVIDIA has claimed that at peak throughput (and in specific situations where application code and DCO can take full advantage of the 7-way execution engine) the Denver-based mobile SoC handily outpaces Intel’s Bay Trail, Apple’s A7 Cyclone, and Qualcomm’s Krait 400 CPU cores. In the results of a synthetic benchmark test provided to The Tech Report, the Denver cores were even challenging Intel’s Haswell-based Celeron 2955U processor. Keeping in mind that these are NVIDIA-provided numbers and likely the best results one can expect, Denver is still quite a bit more capable than existing cores. (Note that the Haswell chips would likely pull much farther ahead when presented with applications that cannot be easily executed in-order with limited instruction parallelism.)
NVIDIA is ratcheting up mobile CPU performance with its Denver cores, but it is also aiming for an efficient chip and has implemented several power saving tweaks. Beyond the decision to go with an in-order execution engine (with DCO hopefully mostly making up for that), the beefy Denver cores reportedly feature low latency power state transitions (e.g. between active and idle states), power gating, dynamic voltage, and dynamic clock scaling. The company claims that “Denver's performance will rival some mainstream PC-class CPUs at significantly reduced power consumption.” In real terms, this should mean that using two Denver cores in place of the quad core A15 design in the Tegra K1 will not result in significantly lower battery life. The two K1 variants are said to be pin compatible so that OEMs and developers can easily bring upgraded models to market with the faster Denver cores.
For those curious, in the Tegra K1 the two Denver cores (clocked at up to 2.5GHz) share a 16-way L2 cache, and each core has its own 128KB instruction and 64KB data L1 caches. The 128MB Dynamic Code Optimization cache is held in system memory.
Denver is the first (custom) 64-bit ARM processor for Android (with Apple’s A7 being the first 64-bit smartphone chip), and NVIDIA is working on supporting the next generation Android OS known as Android L.
The dual Denver core Tegra K1 is coming later this year and I am excited to see how it performs. The current K1 chip already has a powerful fully CUDA compliant Kepler-based GPU which has enabled awesome projects such as computer vision and even prototype self-driving cars. With the new Kepler GPU and Denver CPU pairing, I’m looking forward to seeing how NVIDIA’s latest chip is put to work and the kinds of devices it enables.
Are you excited for the new Tegra K1 SoC with NVIDIA’s first fully custom cores?
Subject: Processors | May 8, 2014 - 12:26 AM | Tim Verry
Tagged: TrustZone, server, seattle, PCI-E 3.0, opteron a1100, opteron, linux, Fedora, ddr4, ARMv8, arm, amd, 64-bit
AMD showed off its first ARM-based “Seattle” processor running on a reference platform motherboard at an event in San Francisco earlier this week. The new chip, which began sampling in March, is slated for general availability in Q4 2014. The “Seattle” processor will be officially labeled the AMD Opteron A1100.
During the press event, AMD demonstrated the Opteron A1100 running on a reference design motherboard (the Seattle Development Platform). The hardware was used to drive a LAMP software stack including an ARM optimized version of Linux based on RHEL, Apache 2.4.6, MySQL 5.5.35, and PHP 5.4.16. The server was then used to host a WordPress blog that included stream-able video.
Of course, the hardware itself is the new and interesting bit and thanks to the event we now have quite a few details to share.
The Opteron A1100 features eight ARM Cortex-A57 cores clocked at 2.0 GHz (or higher). AMD has further packed in an integrated memory controller, TrustZone encryption hardware, and floating point and NEON video acceleration hardware. Like a true SoC, the Opteron A1100 supports 8 lanes of PCI-E 3.0, eight SATA III 6Gbps ports, and two 10GbE network connections.
The Seattle processor has a total of 4MB of L2 cache (each pair of cores shares 1MB of L2) and 8MB of L3 cache that all eight cores share. The integrated memory controller supports DDR3 and DDR4 memory in SO-DIMM, unbuffered DIMM, and registered ECC RDIMM forms (only one type per motherboard), enabling the ARM-based platform to be used in a wide range of server environments (enterprise, SMB, and home servers, among others).
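The cache arithmetic above is easy to verify from the stated layout. A quick sketch (the variable names are mine, but the figures are the ones AMD disclosed):

```python
# Sanity check of the Opteron A1100 cache figures stated above:
# eight A57 cores arranged in pairs, each pair sharing 1MB of L2,
# plus an 8MB L3 cache shared by all eight cores.
cores = 8
l2_per_pair_mb = 1
pairs = cores // 2                     # 4 core pairs
total_l2_mb = pairs * l2_per_pair_mb   # 4MB of L2 in total
l3_mb = 8                              # shared L3
total_cache_mb = total_l2_mb + l3_mb   # 12MB of combined L2 + L3
```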
AMD has stated that the upcoming Opteron A1100 processor delivers between two and four times the performance of the existing Opteron X series (which uses four x86 Jaguar cores clocked at 1.9 GHz). The A1100 has a 25W TDP and is manufactured by Global Foundries. Despite the slight increase in TDP versus the Opteron X series (the Opteron X2150 is a 22W part), AMD claims the increased performance results in notable improvements in compute/watt performance.
AMD has engineered a reference motherboard, though partners will also be able to provide customized solutions. The combination of the reference motherboard and the ARM-based Opteron A1100 is known as the Seattle Development Platform. This reference motherboard features four registered DDR3 DIMM slots for up to 128GB of memory, eight SATA 6Gbps ports, support for standard ATX power supplies, and multiple PCI-E connectors that can be configured to run as a single PCI-E 3.0 x8 slot or two PCI-E 3.0 x4 slots.
The Opteron A1100 is an interesting move from AMD that will target low power servers. The ARM-based server chip has an uphill battle in challenging x86-64 in this space, but the SoC does have several advantages in terms of compute performance per watt and overall cost. AMD has taken the SoC elements (integrated IO, memory, companion processor hardware) of the Opteron X series and its APUs in general, removed the graphics portion, and crammed in as many low power 64-bit ARM cores as possible. This configuration will have advantages over the Opteron X CPU+GPU APU when running applications that use multiple serial threads and can take advantage of large amounts of memory per node (up to 128GB). The A1100 should excel in serving up files and web pages or acting as a caching server where data can be held in memory for fast access.
I am looking forward to the launch as the 64-bit ARM architecture makes its first major inroads into the server market. The benchmarks, and ultimately software stack support, will determine how well it is received and if it ends up being a successful product for AMD, but at the very least it keeps Intel on its toes and offers up an alternative and competitive option.
Subject: Mobile | April 8, 2014 - 07:47 PM | Tim Verry
Tagged: SoC, snapdragon, qualcomm, LTE, ARMv8, adreno, 64-bit
Qualcomm has announced two new flagship 64-bit SoCs with the Snapdragon 808 and Snapdragon 810. The new chips will begin sampling later this year and should start showing up in high end smartphones towards the second half of 2015. The new 800-series parts join the previously announced mid-range Snapdragon 610 and 615 which are also 64-bit ARMv8 parts.
The Snapdragon 810 is Qualcomm's new flagship processor. The chip features four ARM Cortex A57 cores and four Cortex A53 cores in a big.LITTLE configuration, an Adreno 430 GPU, and support for Category 6 LTE (up to 300 Mbps downloads) and LPDDR4 memory. This flagship part uses the 64-bit ARMv8 ISA. The new Adreno 430 GPU integrated in the SoC is reportedly 30% faster than the Adreno 420 GPU in the Snapdragon 805 processor.
In addition to the flagship part, Qualcomm is also releasing the Snapdragon 808 which pairs two Cortex A57 CPU cores and four Cortex A53 CPU cores in a big.LITTLE configuration with an Adreno 418 (approximately 20% faster than the popular Adreno 320) GPU. This chip supports LPDDR3 memory and Qualcomm's new Category 6 LTE modem.
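The big.LITTLE arrangement in both chips boils down to routing demanding work to the fast Cortex-A57 cores and light work to the power-sipping Cortex-A53 cores. Here is a toy sketch of that idea; the core lists reflect the Snapdragon 810's 4+4 layout as described above, but the threshold, task model, and function name are invented for illustration and are nothing like a real kernel scheduler.

```python
# Toy illustration of big.LITTLE task routing. Real schedulers (e.g. in
# the Linux kernel) are far more sophisticated; this only shows the idea.

BIG_CORES = ["A57-0", "A57-1", "A57-2", "A57-3"]     # Snapdragon 810's big cluster
LITTLE_CORES = ["A53-0", "A53-1", "A53-2", "A53-3"]  # power-efficient cluster
DEMAND_THRESHOLD = 0.5  # hypothetical cutoff on a 0..1 load estimate

def pick_core(task_load, big_free, little_free):
    """Route a task to a big or LITTLE core based on its estimated load."""
    if task_load >= DEMAND_THRESHOLD and big_free:
        return big_free.pop(0)       # heavy task: burn power for speed
    if little_free:
        return little_free.pop(0)    # light task: save power
    return big_free.pop(0) if big_free else None  # fall back to whatever is free
```

For the Snapdragon 808 the same sketch applies with only two entries in the big-core list.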
Both the 808 and 810 have Adreno GPUs which support OpenGL ES 3.1. The new chips support a slew of wireless I/O including Category 6 LTE, 802.11ac Wi-Fi, Bluetooth 4.1, and NFC.
Qualcomm is reportedly planning to produce these SoCs on a 20nm process. For reference, the mid-range 64-bit Snapdragon 610 and 615 use a 28nm LP manufacturing process. The new 20nm process (presumably from TSMC) should enable improved battery life and clockspeed headroom on the flagship parts. Exactly how large those gains will be depends on the specific manufacturing process: at the same transistor count, a straightforward bulk/planar shrink from the 28nm node used in existing chips would bring smaller gains, while more advanced methods such as FD-SOI could deliver greater improvements.
The 808 and 810 parts are the new high-end 64-bit chips, and they will effectively supplant the 32-bit Snapdragon 805, itself a marginal update over the Snapdragon 800. The naming conventions and product lineups are getting a bit crazy here, but suffice it to say that the 808 and 810 are the effective successors to the 800, while the 805 is a stop-gap upgrade as Qualcomm moves to 64-bit ARMv8 and secures manufacturing for the new chips. The new parts should be slightly faster on the CPU side, notably faster on the GPU side, and more capable thanks to the faster cellular modem support and 64-bit ISA.
For those wondering, the press release also states that the company is still working on development of its custom 64-bit Krait CPU architecture. However, it does not appear that 64-bit Krait will be ready by the first half of 2015, which is why Qualcomm has opted to use ARM's Cortex A57 and A53 cores in its upcoming flagship 808 and 810 SoCs.
Subject: General Tech, Processors, Mobile | January 21, 2014 - 04:14 AM | Scott Michaud
Tagged: x86, Intel, Android, 64-bit
Given how long it took Intel to release a good 64-bit architecture, dragged ear-first by AMD, it does seem a little odd for them to lead the tablet charge. ARM developers are still focusing on 32-bit architectures and current Windows 8.1 tablets tend to stick with 32-bit because of Connected Standby bugs. Both of these should be cleared up soon.
Also, 64-bit Android tablets should be available this spring based on Bay Trail.
According to Peter Bright of Ars Technica, Android will be first to 64-bit on its x86 build while the ARM variant hovers at 32-bit for a little while longer. It would not surprise me if Intel's software engineers contributed heavily to this development (which is a good thing). I expect NVIDIA to do the same, if necessary, to ensure that Project Denver will launch successfully later this year.
The most interesting part about this is how the PC industry, a symbol of corporate survival of the fittest, typically stomps on siloed competitors but is now facing the ARM industry built on a similar Darwin-based logic. Both embrace openness apart from a few patented instruction sets. Who will win? Well, probably Web Standards, but that is neither here nor there.
Subject: General Tech, Systems | June 4, 2013 - 11:44 PM | Tim Verry
Tagged: computex 2013, computex, X-Gene, mitac, ARMv8, appliedmicro, 7-star, 64-bit
During Computex, MiTAC announced a new high density "7-Star" ARMv8 server. Aimed at the enterprise market, the 7-Star platform is a 4U server that holds up to 18 compute cards. Each compute card contains an eight-core ARMv8-based X-Gene processor from AppliedMicro, two DDR3 DIMM slots, and space for two 2.5"/3.5" internal storage drives (SSD or HDD). The compute cards use a 10G SFP+ and a single Gigabit Ethernet port for networking purposes.
Of course, the interesting bit about the 7-Star is that it is one of the first servers to use processors based on ARM's 64-bit ARMv8 architecture. MiTAC worked with ARM and AppliedMicro on the project, and it should be available later this year. It is currently being shown off at the ARM Holdings demo suite in Taipei, Taiwan. I'm interested to see how well these 64-bit ARM servers do, especially with new low power chips from Intel and AMD on the way!
Read more about ARMv8 at PC Perspective.
The full press release is below:
Subject: General Tech | April 22, 2013 - 02:04 PM | Jeremy Hellstrom
Tagged: opteron, history, get off my lawn, amd, 64-bit
AMD64 arrived a decade ago with the launch of the first Opteron processor in April of 2003, back in the days when NVIDIA made motherboards and ATI was a separate company. At the time AMD looked like serious competition for Intel, out-innovating its rival and competing for Big Blue's niche markets; it was first to cross the GHz line and first to offer a 64-bit architecture on a commercially available platform. Intel eventually licensed AMD64, re-branded it as x86-64, and used it on its Xeon processor line, a huge victory for AMD. Unfortunately there was not much consumer software capable of taking advantage of a 64-bit architecture, and that largely remains the case to this day, apart from people's ability to benefit from the enlarged RAM pool it allows. Take a walk down memory lane at The Inquirer and remember the good old days when AMD was prospering.
"A DECADE AGO AMD released the first Opteron processor and with it the first 64-bit x86 processor."
Here is some more Tech News from around the web:
- Intel pushing adaptive all-in-one PCs with new components @ DigiTimes
- ASUS PCE-AC66 review: 802.11ac via PCIe @ Hardware.info
- Garmin nuvi 2597LMT Review @ TechReviewSource
- The TR Podcast 132: BioShock, bundles and big SSDs
Subject: Systems | April 19, 2013 - 03:56 AM | Tim Verry
Tagged: servers, project moonshot, microserver, hp, arm, Applied Micro Circuits, 64-bit
A recent press release from AppliedMicro (Applied Micro Circuits Corporation) announced that the company’s X-Gene server on a chip technology would be used in an upcoming HP Project Moonshot server.
An HP Moonshot server (expect the X-Gene version to be at least slightly different).
The X-Gene is a 64-bit ARM SoC that combines ARM processing cores with networking and storage offload engines as well as a high-speed interconnect networking fabric. AppliedMicro designed the chip to provide ARM-powered servers that will reportedly reduce the Total Cost of Ownership of running webservers in a data center by reducing upfront hardware and ongoing electrical costs.
The X-Gene chips that will appear in HP’s Project Moonshot servers feature a SoC with eight AppliedMicro-designed 64-bit ARMv8 cores clocked at 2.4GHz, four ARM Cortex A5 cores for running the Software Defined Network (SDN) controller, and support for storage IO, PCI-E IO, and integrated Ethernet (four 10Gb Ethernet links). The X-Gene chips sit on daughter cards that slot into a carrier board with networking fabric to connect all the X-Gene cards (and the SoCs on those cards). Currently, servers using X-Gene SoCs require a hardware switch to connect all of the X-Gene cards in a rack; however, the next-generation 28nm X-Gene chips will eliminate the need for a rack-level hardware switch and will add 100Gb networking links.
The X-Gene chips in HP Project Moonshot will use relatively little power compared to Xeon-based solutions. AppliedMicro has stated that the X-Gene chips will be at least twice as power efficient, but has not officially released power consumption numbers for the X-Gene chips under load. At idle, however, the X-Gene SoCs will use as little as 500mW, and as little as 300mW in standby (sleep mode). The 64-bit, quad-issue, out-of-order chips are some of the most powerful ARM processors to date, though they will soon be joined by ARM’s own 64-bit design(s). I think the X-Gene chips are intriguing, and I am excited to see how well they fare in the data center environment running server applications. ARM has handily taken over the mobile space, but it is still relatively new in the server world. Even so, the 64-bit ARM chips by AppliedMicro (X-Gene) and others are the first step towards ARM being a viable option for servers.
According to AppliedMicro, HP Project Moonshot servers with X-Gene SoCs will be available later this year. You can find the press blast below.
Subject: General Tech | December 26, 2012 - 04:34 PM | Tim Verry
Tagged: mozilla, firefox, browser, Internet, 64-bit
A month ago Mozilla announced that it would no longer release 64-bit versions of its popular Firefox web browser due to a lack of resources. While the stable versions for Windows were 32-bit, nightly builds were available to enthusiasts that were 64-bit and could take advantage of more than 4GB of memory.
Mozilla developer Benjamin Smedberg stated that there was significant negative feedback from the community over the decision to axe 64-bit nightlies. While Mozilla has reaffirmed that it does not have the resources to support 64-bit builds, the developers are proposing a compromise that they hope will assuage users. In short, the Release Engineering team will continue to build 64-bit versions of the Firefox browser, but Mozilla will consider it a tier 3 build and support is left up to the community.
Currently, the plan regarding 64-bit versions of Firefox involves a forced migration of existing 64-bit users to 32-bit versions via the automatic browser updates. Then, after the migration date, users that want the 64-bit version will need to go and download it again. Once installed, users will be informed that it is not officially supported software and they are to use it at their own risk. Click-to-play plugins will be enabled in the 64-bit builds while the crash reporter will be disabled. Win64 tests and on-checkin builds of the browser will be discontinued.
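The migration plan above amounts to a simple decision rule for the auto-updater. A minimal sketch of that rule, assuming the policy exactly as described; the function name, inputs, and return strings are all invented for illustration and are not Mozilla's actual updater code:

```python
# Hypothetical sketch of the 64-bit Firefox migration policy described
# above. Names and return values are invented illustrations.

def update_action(install_bits, past_migration_date):
    """Decide what the auto-updater does with an existing Firefox install."""
    if install_bits == 64 and not past_migration_date:
        return "migrate-to-32bit"   # forced move via automatic browser updates
    if install_bits == 64:
        return "warn-unsupported"   # re-downloaded x64 build: use at your own risk
    return "normal-update"          # 32-bit installs stay on the supported path
```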
Interestingly, all browser testing by Mozilla will be done on the 64-bit edition of Windows 8. Yet they are only testing and supporting 32-bit versions of Firefox. The current situation is less than ideal as the x64 Firefox browsers will not be supported by Mozilla, but at least the software will still be available for those that need it. For now, Waterfox is an option for those that need to install a 64-bit browser based on Firefox.
Does Mozilla’s decision to stop supporting the 64-bit Firefox browser affect you? What do you think of the offered compromise?
Subject: General Tech | November 22, 2012 - 01:03 PM | Jeremy Hellstrom
Tagged: mozilla, firefox, dumb, 64-bit
Once upon a time there was a little company called Mozilla, which had a browser that knew some tricks no other browser did. After a while the Mozilla Foundation decided to split up several projects and the Firefox browser was born, again capable of things that no other browser was doing at the time. The other browsers were quick to pick up on these tricks and emulate them, but Firefox held onto a respectable share of overall usage, which slowly eroded as other browsers came onto the scene to steal away some of that share. Apparently this depressed Firefox, as it started on a steady diet of add-ons and stuffed extras in below the belt, eventually causing such bloating that those who cared about Firefox suggested it might want to think about slimming down a bit, or at least wear something a little larger, maybe a size 64.
Instead, according to various sources such as DailyTech, Firefox has decided to dump all development of a 64-bit version of its browser. IE10 supports 64-bit, Opera supports 64-bit, and Chrome does on Linux and is working on a Windows version for the near future, leaving Firefox in the company of Lynx. While the news stories are specific to the Firefox browser, it leaves one suspicious about the Firefox OS being developed for mobile devices: just what features are going to be abandoned there as too hard to continue developing?
"Fans of the non-profit Mozilla Foundation have waited... and waited... and waited more still, for Mozilla's popular Firefox browser to add 64-bit support. With pickup of 64-bit SKUs of Microsoft Corp.'s (MSFT) Windows operating system rapidly accelerating, it certainly seemed a 64-bit browser would be just around the corner.
Instead Mozilla has made the curious decision to pull the plug on the long-delayed project, while offering only small clues as to why the decision was made."
Here is some more Tech News from around the web:
- The 3D Printing Wars Begin @ MAKE:Blog
- Samsung brews half-asleep OCTO CORE phone brain MONSTER @ The Register
- Win OCZ RevoDrive 3 PCIe SSD and Kitguru fans! @ Kitguru