How can you make your Pentium G3258 system cheaper? Run Ubuntu!

Subject: Processors | July 22, 2014 - 04:15 PM |
Tagged: linux, Pentium G3258, ubuntu 14.10

Phoronix tested out the 20th Anniversary Pentium CPU on Ubuntu 14.10 and were impressed right off the bat, managing a perfectly stable overclock of 4.4GHz on air.  Using Linux 3.16 and Mesa 10.2 they had no issues with the onboard GPU, though its performance lagged behind the faster GPUs on the Haswell chips they tested against.  When they benchmarked the CPU, the lack of Advanced Vector Extensions and the fact that it is a dual core part showed in the results, but when you consider the price difference between a G3258 and a 4770K it fares quite well.  Stay tuned for their next set of benchmarks, which will compare the G3258 to AMD's current offerings.
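
The AVX gap is easy to verify for yourself: on Linux the CPU's feature flags are listed in /proc/cpuinfo. Here is a minimal sketch (assuming a Linux host with the usual procfs layout):

```python
# Check /proc/cpuinfo for instruction set extensions the Pentium G3258 lacks.
def has_cpu_flag(flag: str, cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                return flag in line.split(":", 1)[1].split()
    return False

if __name__ == "__main__":
    for flag in ("avx", "avx2", "sse4_2"):
        print(f"{flag}: {'present' if has_cpu_flag(flag) else 'missing'}")
```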

"Up for review today on Phoronix is the Pentium G3258, the new processor Intel put out in celebration of their Pentium brand turning 20 years old. This new Pentium G3258 processor costs under $100 USD and comes unlocked for offering quite a bit overclocking potential while this Pentium CPU can be used by current Intel 8 and 9 Series Chipsets. Here's our first benchmarks of the Intel Pentium G3258 using Ubuntu Linux."

Here are some more Processor articles from around the web:

Processors

Source: Phoronix

HSA on Linux

Subject: General Tech | July 11, 2014 - 02:36 PM |
Tagged: linux, hsa, amd, open source

Open source HSA has arrived for the Linux kernel with a newly released set of patches which will allow Sea Islands and newer GPUs to share hardware resources.  The patches cover both a sample driver for any HSA-compatible hardware and the driver for Radeon GPUs.  As the debut of the Linux 3.16 kernel is so close, you shouldn't expect to see these patches included until 3.17, which should be released in the not too distant future.  Phoronix and Linux users everywhere give a big shout of thanks to AMD's John Bridgman for his work on this project.

"AMD has just published a massive patch-set for the Linux kernel that finally implements a HSA (Heterogeneous System Architecture) in open-source. The set of 83 patches implement a Linux HSA driver for Radeon family GPUs and serves too as a sample driver for other HSA-compatible devices. This big driver in part is what well known Phoronix contributor John Bridgman has been working on at AMD."

Here is some more Tech News from around the web:

Tech Talk

Source: Phoronix

Do you know Juno?

Subject: General Tech | July 3, 2014 - 12:39 PM |
Tagged: linux, linaro, juno, google, armv8-a, ARMv8, arm, Android

By now you should have read Ryan's post or listened to Josh talk about Juno on the PCPer Podcast, but if you find yourself hungry for more information you can visit The Tech Report.  They discuss how the 64-bit Linaro build is already able to take advantage of one of big.LITTLE's power efficiency optimizations, called Global Task Scheduling.  As Linaro releases monthly updates you can expect to see more features and better implementations as their take on the Android Open Source Project evolves.  Expect to see more of Juno and ARMv8 on review sites as we work out just how to benchmark these devices.

"ARM has created its own custom SoC and platform for 64-bit development. The folks at Linaro have used this Juno dev platform to port an early version of Android L to the ARMv8 instruction set. Here's a first look at the Juno hardware and the 64-bit software it enables."

Here is some more Tech News from around the web:

Tech Talk

ARM Ships Juno Development Platform for ARMv8-A Integration

Subject: Mobile | July 2, 2014 - 12:00 PM |
Tagged: linux, linaro, juno, google, armv8-a, ARMv8, arm, android l

Even though Apple has been shipping a 64-bit capable SoC since the release of the A7 part in September of 2013, the Android market has yet to see its first consumer 64-bit SoC release. That is about to change as we progress through the rest of 2014, and ARM is making sure that major software developers have the tools they need to be ready for the architecture shift. That help will come in the form of the Juno ARM Development Platform (ADP) and a 64-bit ready software stack.

Apple's A7 is the first core to implement ARMv8, but companies like Qualcomm, NVIDIA and of course ARM have their own cores based on the 64-bit architecture. Much like we saw with the 64-bit transition in the x86 ecosystem, ARMv8 will improve access to large datasets and will bring performance gains thanks to increased register sizes, virtual address spaces larger than 4GB and more. ARM also improved NEON (SIMD) performance and cryptography support while they were in there fixing up the house.

The Juno platform is the first 64-bit development platform to come directly from ARM and combines a host of components to create a reference hardware design for integrators and developers to target moving forward. Featuring a test chip built around Cortex-A57 (dual core), Cortex-A53 (quad core) and Mali-T624 (quad core), Juno allows software to target 64-bit development immediately without waiting for other SoC vendors to have product silicon ready. The hardware configuration implements big.LITTLE, OpenGL ES3.0 support, thermal and power management, Secure OS capability and more. In theory, ARM has built a platform that will be very similar to SoCs built by its partners in the coming months.

ARM isn't quite talking about the specific availability of the Juno platform, but for the target audience ARM should be able to provide the number of development platforms necessary. Juno enables software development for 64-bit kernels, drivers, tools, and virtual machine hypervisors, but it's not necessarily going to help developers writing generic applications. Think of Juno as the development platform for the low level designers and coders, not those that are migrating Facebook or Flappy Bird to your next smartphone.

The Juno platform helps ARM in a couple of specific ways. From a software perspective, it creates a common foundation for the ARMv8 ecosystem and allows developer access to silicon before ARM's partners have prepared their own platforms. ARM claims that Juno is a fairly "neutral" platform so software developers won't feel like they are being funneled in one direction. I'd be curious what ARM's partners actually think about that, though, given the inclusion of Mali graphics, a product that ARM is definitely trying to promote in a competitive market.

[Diagram: Juno SoC block diagram and external board connectivity]

Though the primary focus might be software, hardware partners will be able to benefit from Juno. On this board they will find the entire ARMv8 IP portfolio tested up to modern silicon. This should enable hardware vendors to see the A57 and A53 working in action, with the added benefit of a full big.LITTLE implementation. The hope is that this will dramatically accelerate the time to market for future 64-bit ARM designs.

The diagram above shows the full breakdown of the Juno SoC as well as some of the external connectivity on the board itself. The memory system is built around 8GB of DDR3 running at 12.8 GB/s, and the platform is extensible through the PCI Express slots and the FPGA options.
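
That 12.8 GB/s figure lines up with a single 64-bit channel of DDR3-1600; a quick back-of-the-envelope check (the transfer rate and channel width are my assumptions to match the quoted number, not details ARM has broken out):

```python
# Peak theoretical bandwidth = transfers per second x bus width in bytes.
transfer_rate_mt_s = 1600   # DDR3-1600, mega-transfers per second (assumed)
bus_width_bytes = 64 // 8   # one 64-bit channel (assumed)

peak_gb_s = transfer_rate_mt_s * bus_width_bytes / 1000
print(f"Peak theoretical bandwidth: {peak_gb_s:.1f} GB/s")  # 12.8 GB/s
```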

Of course hardware is only half the story - today Linaro is releasing a 64-bit port of the Android Open Source Project (AOSP) that will run on Juno. That, along with the Linux kernel v3.14 with ARMv8-A support, should give developers the tools needed to write the applications, middleware and kernels for future hardware. Also worth noting: at Google I/O on June 25th, Google announced that developer access is coming for Android L, and that build will support ARMv8-A as well.

The switch to 64-bit technology on ARM devices isn't going to happen overnight, but ARM and its partners have put together a collective ecosystem that will allow software and hardware developers to make the transition as quick and, most importantly, as painless as possible. With outside pressure pushing on ARM and its low power processor designs, it is taking more of its fate into its own hands, pushing the 64-bit transition forward at an accelerated pace. This helps ARM in the mobile and consumer spaces as well as the enterprise market, a key area for SoC growth.

Docker is headed for the big time

Subject: General Tech | June 11, 2014 - 12:33 PM |
Tagged: google, virtualization, linux, container, Linux Containerization, docker, Red Hat, ubuntu

Docker has put the libcontainer execution engine of their Linux Containerization onto GitHub, making it much easier to adopt their alternative virtualization technology and modify it for specific usage scenarios.  So far Google, Red Hat and Parallels have started adding their own improvements to the Go-based libcontainer, joining the Ubuntu dev team already at work. This collaboration should help containerization become a viable alternative to virtual machines and hopefully be included as a feature in future Linux distros.  Read more over at The Register.

"Docker has spun off a key open source component of its Linux Containerization tech, making it possible for Google, Red Hat, Ubuntu, and Parallels to collaborate on its development and make Linux Containerization the successor to traditional hypervisor-based virtualization."

Here is some more Tech News from around the web:

Tech Talk

Source: The Register

Thinking of swapping Linux for Windows on your new Bay Trail NUC?

Subject: Processors | June 5, 2014 - 06:32 PM |
Tagged: baytrail, linux, N2820, ubuntu 14.04, Linux 3.13, Linux 3.15, mesa, nuc

It would seem that installing Linux on your brand new Bay Trail powered NUC will cost you a bit of performance.  The testing Phoronix has performed on the Intel NUC DN2820FYKH proves that it can handle running Linux without a hitch, however you will find that your overall graphical performance will dip a bit.  Using Mesa 10.3 and both the current 3.13 kernel and the 3.15 development kernel, Phoronix saw a small delta in performance between Ubuntu 14.04 and Win 8.1 ... until they hit the OpenGL tests.  As there is still no full OpenGL 4.0+ support, some tests could not be run, and even in the tests that could be run there was a very large performance gap.  Do not let this worry you; as they point out in the article, there is a dedicated team working on full compliance and you can expect updated results in the near future.
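
If you want to see what your own Mesa install advertises, the reported OpenGL renderer and version strings can be pulled from glxinfo (part of the mesa-utils package); a small sketch, assuming glxinfo is installed and a display is available:

```python
# Print the OpenGL renderer and version strings reported by the current driver.
import subprocess

output = subprocess.run(["glxinfo"], capture_output=True, text=True, check=True).stdout
for line in output.splitlines():
    if line.startswith(("OpenGL renderer string", "OpenGL version string")):
        print(line.strip())
```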

"A few days ago my benchmarking revealed Windows 8.1 is outperforming Ubuntu Linux with the latest Intel open-source graphics drivers on Haswell hardware. I have since conducted tests on the Celeron N2820 NUC, and sadly, the better OpenGL performance is found with Microsoft's operating system."

Here are some more Processor articles from around the web:

Processors

Source: Phoronix

Intel Announces "Cars Are Things" - with New Automotive Platform

Subject: General Tech | May 30, 2014 - 10:21 AM |
Tagged: SoC, linux, internet of things, Intel, automotive, automation, atom

Imagine: You get into the family car and it knows that it’s you, so it adjusts everything just the way you like it. You start driving and your GPS is superimposed over the road in real time from within your windshield, with virtual arrows pointing to your next turn. Kids play games on their touchscreen windows in the back, and everyone travels safely as their cars anticipate accidents...

Sound far-fetched? Work is already being done to make things like these a reality, and Intel has now announced their stake in the future of connected, and eventually autonomous, automobiles.

[Image: Intel's new automotive computing platform]

Ensuring that every device in our lives is always connected seems like the goal of many companies going forward, and the “Internet of Things” is a very real, and rapidly growing, part of the business world. Intel is no exception, and since cars are things (as I’ve been informed) it makes sense to look in this area as well, right? Well, Intel has announced development of their automotive initiative, with the overall goal of creating safer - and eventually autonomous - cars. Doug Davis, Corporate VP of Intel’s Internet of Things Group, hosted the online event, which began with a video depicting automotive travel in a fully connected world. It doesn’t seem that far away...

"We are combining our breadth of experience in consumer electronics and enterprise IT with a holistic automotive investment across product development, industry partnerships and groundbreaking research efforts,” Davis said. “Our goal is to fuel the evolution from convenience features available in the car today to enhanced safety features of tomorrow and eventually self-driving capabilities.”

So how exactly does this work? The tangible element of Intel’s vision of connected, computer-controlled vehicles begins with the In-Vehicle Solutions Platform, which provides Intel silicon to automakers. And as it’s an “integrated solution,” Intel points out that this should cut time and expense from the current, more complex methods employed in assembling automotive computer systems. Makes sense, since they are delivering a complete Intel Atom-based system platform, powered by the E3800 processor. The OS is Tizen IVI ("automotive grade" Linux). A development kit was also announced, and there are already companies creating systems using this platform, according to Intel.

Source: Intel

Google's containerific alternative to virtualization

Subject: General Tech | May 26, 2014 - 03:48 PM |
Tagged: google, virtualization, linux, container, Linux Containerization

Google creates two billion Linux containers a week, which astute readers will realize implies that they can be created much more quickly than VMs.  That is indeed the case: these Linux containers are very similar to Solaris Zones, BSD Jails and other ways of sharing parts of an OS across multiple isolated applications, as opposed to VMs in which each machine has its own OS.  Even with prebuilt images it is orders of magnitude slower to create a VM than to simply create a new container.  With the involvement of a startup called Docker, Google has really changed how they handle their systems; read about the impacts at The Register.
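
To make the container/VM distinction concrete, here is a minimal sketch of the kernel primitive containers are built on, namespaces, calling unshare(2) directly (assumes Linux with glibc, root privileges and Python 3.3+; real runtimes like Docker's libcontainer layer mount, PID and network namespaces plus cgroups on top of this):

```python
# A container is just a normal process isolated by kernel namespaces;
# no guest OS has to boot, which is why containers start far faster than VMs.
import ctypes
import socket

CLONE_NEWUTS = 0x04000000  # unshare the hostname (UTS) namespace

libc = ctypes.CDLL("libc.so.6", use_errno=True)
if libc.unshare(CLONE_NEWUTS) != 0:
    raise OSError(ctypes.get_errno(), "unshare failed (root is normally required)")

# Hostname changes made here are invisible to the host and to other namespaces.
socket.sethostname("container-demo")
print("hostname inside the new namespace:", socket.gethostname())
```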

"That tech is called Linux Containerization, and is the latest in a long line of innovations meant to make it easier to package up applications and sling them around data centers. It's not a new approach – see Solaris Zones, BSD Jails, Parallels, and so on – but Google has managed to popularize it enough that a small cottage industry is forming around it."

Here is some more Tech News from around the web:

Tech Talk

Source: The Register

AMD Shows Off ARM-Based Opteron A1100 Server Processor And Reference Motherboard

Subject: Processors | May 8, 2014 - 12:26 AM |
Tagged: TrustZone, server, seattle, PCI-E 3.0, opteron a1100, opteron, linux, Fedora, ddr4, ARMv8, arm, amd, 64-bit

AMD showed off its first ARM-based “Seattle” processor running on a reference platform motherboard at an event in San Francisco earlier this week. The new chip, which began sampling in March, is slated for general availability in Q4 2014. The “Seattle” processor will be officially labeled the AMD Opteron A1100.

During the press event, AMD demonstrated the Opteron A1100 running on a reference design motherboard (the Seattle Development Platform). The hardware was used to drive a LAMP software stack including an ARM optimized version of Linux based on RHEL, Apache 2.4.6, MySQL 5.5.35, and PHP 5.4.16. The server was then used to host a WordPress blog that included stream-able video.

[Image: AMD's Seattle Development Platform with the Opteron A1100]

Of course, the hardware itself is the new and interesting bit and thanks to the event we now have quite a few details to share.

The Opteron A1100 features eight ARM Cortex-A57 cores clocked at 2.0 GHz (or higher). AMD has further packed in an integrated memory controller, TrustZone encryption hardware, and floating point and NEON video acceleration hardware. Like a true SoC, the Opteron A1100 supports 8 lanes of PCI-E 3.0, eight SATA III 6Gbps ports, and two 10GbE network connections.

The Seattle processor has a total of 4MB of L2 cache (each pair of cores shares 1MB of L2) and 8MB L3 cache that all eight cores share. The integrated memory controller supports DDR3 and DDR4 memory in SO-DIMM, unbuffered DIMM, and registered ECC RDIMM forms (only one type per motherboard) enabling the ARM-based platform to be used in a wide range of server environments (enterprise, SMB, and home servers et al).

AMD has stated that the upcoming Opteron A1100 processor delivers between two and four times the performance of the existing Opteron X series (which uses four x86 Jaguar cores clocked at 1.9 GHz). The A1100 has a 25W TDP and is manufactured by Global Foundries. Despite the slight increase in TDP versus the Opteron X series (the Opteron X2150 is a 22W part), AMD claims the increased performance results in notable improvements in compute/watt performance.
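
Taking those figures at face value, the compute-per-watt claim is easy to sanity check (a rough sketch using only the numbers quoted above; real efficiency will of course depend on the workload):

```python
# Rough compute-per-watt comparison from AMD's quoted figures.
a1100_tdp_w = 25             # Opteron A1100 TDP
x2150_tdp_w = 22             # Opteron X2150 TDP
claimed_perf_range = (2, 4)  # AMD's claimed 2x to 4x performance over Opteron X

for mult in claimed_perf_range:
    gain = mult * x2150_tdp_w / a1100_tdp_w
    print(f"{mult}x the performance -> roughly {gain:.1f}x the compute per watt")
```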

AMD has engineered a reference motherboard, though partners will also be able to provide customized solutions. The combination of reference motherboard and ARM-based Opteron A1100 is known as the Seattle Development Platform. This reference motherboard features four registered DDR3 DIMM slots for up to 128GB of memory, eight SATA 6Gbps ports, support for standard ATX power supplies, and multiple PCI-E connectors that can be configured to run as a single PCI-E 3.0 x8 slot or two PCI-E 3.0 x4 slots.

The Opteron A1100 is an interesting move from AMD that will target low power servers. The ARM-based server chip has an uphill battle in challenging x86-64 in this space, but the SoC does have several advantages in terms of compute performance per watt and overall cost. AMD has taken the SoC elements (integrated IO, memory, companion processor hardware) of the Opteron X series and its APUs in general, removed the graphics portion, and crammed in as many low power 64-bit ARM cores as possible. This configuration will have advantages over the Opteron X CPU+GPU APU when running applications that use multiple serial threads and can take advantage of large amounts of memory per node (up to 128GB). The A1100 should excel in serving up files and web pages or acting as a caching server where data can be held in memory for fast access.

I am looking forward to the launch as the 64-bit ARM architecture makes its first major inroads into the server market. The benchmarks, and ultimately software stack support, will determine how well it is received and if it ends up being a successful product for AMD, but at the very least it keeps Intel on its toes and offers up an alternative and competitive option.

Source: Tech Report

Another GPU Driver Showdown: AMD vs NVIDIA in Linux

Subject: General Tech, Graphics Cards | April 27, 2014 - 04:22 AM |
Tagged: nvidia, linux, amd

GPU drivers have been a hot and sensitive topic at the site, especially recently, probably spurred on by the announcements of Mantle and DirectX 12. These two announcements admit and illuminate (like a Christmas tree) the limitations of APIs on gaming performance. Both AMD and NVIDIA have their recent successes and failures on their respective fronts. This will not deal with that, though. This is a straight round-up of new GPUs running the latest drivers... in Linux.

Again, results are mixed and a bit open to interpretation.

In all, NVIDIA tends to have better performance with its 700-series parts than equivalently-priced R7 or R9 products from AMD, especially in less demanding Source Engine titles such as Team Fortress 2. Sure, even the R7 260X was almost at 120 FPS, but the R9 290 was neck-and-neck with the GeForce GTX 760. The GeForce GTX 770, about $50 cheaper than the R9 290, had a healthy 10% lead over it.

In Unigine Heaven, however, the AMD R9 290 passed the NVIDIA GTX 770 by a small margin, coming right in line with its aforementioned $50-higher price tag. In that situation, where performance became non-trivial, AMD caught up (but did not pull ahead). Also, third-party driver support is more embraced by AMD than NVIDIA. On the other hand, NVIDIA's proprietary drivers are demonstrably better, even if you would argue that the specific cases are trivial because of overkill.

And then there's Unvanquished, where AMD's R9 290 did not achieve triple-digit FPS scores despite the $250 GTX 760 getting 110 FPS.

Update: As pointed out in the comments, some games perform significantly better on the $130 R7 260X than the $175 GTX 750 Ti (HL2: Lost Coast, TF2, OpenArena, Unigine Sanctuary). Some other games are the opposite, with the 750 Ti holding a sizable lead over the R7 260X (Unigine Heaven and Unvanquished). Again, Linux performance is a grab bag between vendors.
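
One way to digest these mixed results is to normalize frame rates by price. A trivial helper along those lines (the example numbers are the $250 GTX 760 and its 110 FPS Unvanquished result quoted above; anything else you plug in is your own data):

```python
# Crude price/performance metric for comparing the mixed Linux results.
def fps_per_dollar(fps: float, price_usd: float) -> float:
    return fps / price_usd

# Example using the figures quoted above for Unvanquished.
print(f"GTX 760: {fps_per_dollar(110, 250):.2f} FPS per dollar")
```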

There are a lot of things to consider, especially if you are getting into Linux gaming. I expect that it will be a hot topic soon, as it picks up... Steam.

Source: Phoronix