Feed aggregator

Costales: Choosing Copy.com as my new file syncing software

Planet Ubuntu - Fri, 04/11/2014 - 14:20
Well, I chose Copy as my new cloud service.
Why? I'd prefer Dropbox, but they give just 2 GB. Copy.com gives 15 GB, plus 5 GB more if you install the software. The web interface is great & the Linux client is useful too (with quick synchronization, unlike Ubuntu One):

  • systray
  • Notifications
  • Proxy or bandwidth settings
But I miss an Ubuntu installer; instead, you have to open a Terminal and enter these commands:
wget https://copy.com/install/linux/Copy.tgz
tar -xvzf Copy.tgz
rm Copy.tgz
cd copy

If your Ubuntu is 32-bit:
cd x86
If your Ubuntu is 64-bit:
cd x86_64

Then:
sudo ./CopyCmd Overlay install
cd ../..
sudo mv copy /opt
cd /opt/copy 
./CopyAgent
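
If you'd rather not pick the architecture directory by hand, a little sketch like this (run right after the cd copy step, and assuming uname -m reports x86_64 on 64-bit installs) chooses it for you:

case "$(uname -m)" in
  x86_64) cd x86_64 ;;   # 64-bit build
  *)      cd x86 ;;      # anything else falls back to the 32-bit build
esac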

Richard Johnson: Linux People It’s Been Awhile

Planet Ubuntu - Fri, 04/11/2014 - 12:18

Hey Linux people, it's been a while, so I wanted to post a little update about what has gone on with me personally, with Linux, and more since last year.

Work and Linux

On the work front I am still doing my thing consulting. I have picked up a few more clients and easily doubled the amount of work I was doing a year ago. My 2 big projects are a company downsizing project as well as a Linux-based media solution. The company downsizing project has actually been going on for a couple of years now. I ended up moving an entire company from a huge office space and small data center down to no office (telecommuting) and 4 virtual machines in a private cloud. It was a little tricky shifting from about 20 infrastructure machines down to 2, but it was really successful. 1 VM is a Windows server running Active Directory, with 3 CentOS Linux VMs that provide Samba, DNS, NFS, web sites (internal and external), and more. I help support the network now, which is nothing more than clicking a button to update everything. The other project is a sweet media appliance running on top of Ubuntu Linux. My goal is to get them switched from 10.04 to 14.04, from old Python to a newer Python, and to move a lot of their Python code base to a C/C++ code base. What have I learned from this project over the past year? MPlayer can suck, old developers and their spaghetti code need to disappear, old Linux people and their use of the root account need to chill, and the default Ubuntu Linux kernel is too bloated for small appliances (the low-latency one as well; thank goodness I can build a real-time kernel).

Personal

On a personal level, I haven’t been on my bike enough and I really need to get back on it. I probably spent too much time fishing last year. I rediscovered my love for the outdoors, which I have really missed. I just need to find me a way to get my workspace outside in the woods somewhere. I would probably be way more productive. I was there when the Blackhawks won the Stanley Cup and I will be there again this year. Every home game, on the glass, is where you will find me throughout the playoffs and Stanley Cup. Can’t wait for that to start.

What I’ve learned over the last year
  • Yocto Project is awesome and sucks at the same time. Awesome because if you know what you are doing and can execute, you can create an amazing embedded solution built on Linux. Sucks because it is super easy for people to create real garbage. I have taken apart a couple of embedded solutions that are big in the market, got to their cores, and just shook my head in disbelief. Just goes to show, people will spend a lot of money just to buy junk.
  • Speaking of spending a lot of money to buy junk, I got bitten by the HiFi audio bug a bit, in a headphone kind of way. No, I did not go with the Fashion Accessories by Dre. See where I said spending a lot of money to buy junk? Well I didn’t. I spent wisely and got hooked up as well. Rocking studio-quality headphones by AKG and Sennheiser, and I am looking at a DAC and amp solution by Schiit as well. Imagine, a Schiit Magni and Modi combo with say the AKG K240 Studio headphones, for less than those fashion accessories by Dre. The only time my setup sounds muddy is if I accidentally drop it in the mud; otherwise it is the way music needs to be listened to (right now, I am actually listening to Jono Bacon growl while writing this. I didn’t do this on purpose either. Shuffle FTW?)
  • Oh, that last one, I learned that the high-end audio market actually likes making sure their products work on Linux. A large percentage of the USB DACs on the market work out of the box with Linux and Mac. Windows requires you to install a driver, of course.
  • VA API, it is real, but for some reason nobody wants to add it properly. MPlayer said a year or so ago they need someone to help add it. Still not done, but thankfully last year someone created an MPlayer package with support, and they haven’t updated it in a year either. In 14.04, VA API still sucks, but don’t feel bad, it sucks for others too, like those in Debian Linux, but it seems to work just fine in the RPM-based Linux camps. (A quick way to check your own system is sketched just after this list.) Yes, I could help fix it, but I am too busy, looking at myself in the mirror.
  • Media network synchronization, why is streaming the only recommended solution? If I have the same video or audio file on 2 different machines in 2 different locations on the same LAN, let me get some perfect audio sync going on easily.
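On the VA API point above: a quick, low-effort way to see whether VA API acceleration is actually usable on your box is the vainfo tool (a sketch; vainfo is, as far as I know, the Ubuntu package name):

sudo apt-get install vainfo
vainfo    # lists supported VA API profiles and entrypoints, or errors out if no usable driver is found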
See You Soon

That’s all for now. Just wanted to say hi again and let everyone know I am still alive. Excited for the 14.04 release to drop. That means I get to update a lot of client machines, which equates to money in my pocket. See, you can make money from Open Source. Hopefully this upcoming year I can make some changes to not only this site, but hop in and give back to Ubuntu again. There are packages that I have worked on that need to get into Debian and Ubuntu eventually as well as some patches I have come up with over the past few months working on a Linux appliance.

Linux People It’s Been Awhile is a post from Richard A. Johnson's blog.

José Antonio Rey: ownCloud Charm Updated!

Planet Ubuntu - Fri, 04/11/2014 - 10:09

A couple of days ago, Canonical announced that the Ubuntu One File and Music Services were being shut down. I was checking some of the alternatives that were presented for migrating files, and found ownCloud was one of them. I had never used it, but knew a Juju Charm was in the Charm Store. So, as I’m still enjoying the benefits of Amazon EC2's free tier, I decided to give it a shot and see how it was.

Once the service was deployed, I checked that everything was running well, but noticed the version was not the latest. I decided to go ahead and fix it, but along the way, while working with Charles Butler, I found several problems, including a broken upgrade-charm hook and some bugs on ownCloud’s side. But everything seems to be running well now, and you should be able to do a new ownCloud deployment or upgrade your current one to the newest version without any errors!
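
For reference, with the charm in the store a fresh deployment or an upgrade comes down to a couple of Juju commands (a minimal sketch, assuming a bootstrapped environment and the charm's default name, owncloud):

juju deploy owncloud          # brand new deployment from the Charm Store
juju expose owncloud          # make the web interface reachable
juju upgrade-charm owncloud   # pull the fixed charm into an existing deployment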

Be aware that we are working on several other bugs for this charm, including the version lock we currently have. Make sure to report any other bugs you find on the Launchpad project, and we’ll take a look at them. Now, go and play with ownCloud!


Sam Hewitt: How-to Shape Tortellini

Planet Ubuntu - Fri, 04/11/2014 - 10:00

Tortellini is simply one of many filled pasta shapes, but it involves a bit more work to shape than others (such as ravioli). So if you're into the meticulous, then you'll enjoy these. :)

Instead of placing a filling between two squares of pasta & then sealing (ravioli), tortellini makes use of one square folded onto itself into a triangle which is then pinched into a "navel" shape.

    Things you'll need:
  • fresh pasta dough, rolled into sheets
  • flour, for dusting
  • small pizza wheel or knife –using a pizza wheel makes cutting a lot easier
  • water
    Directions
  1. Flour a clean, dry surface, such as a countertop, table or large cutting board.
  2. Place a small bowl of water within reach.
  3. Dust the pasta sheets with flour & cut into ~3-inch squares.
  4. Scoop approximately 1 teaspoon of your filling into the center of a pasta square.
  5. To seal, dip a finger in the water and then dampen two edges of the pasta square –the water makes the dough slightly gummy so it will stick to itself.
  7. Fold one edge over and, starting at the smaller (~45°) corner, gently pinch it closed.
  8. Next, gently pinch the other side closed, starting at the larger (~90°) corner.
  9. If you've added too much filling it will squirt out, but that's alright.
  9. Flip the half-formed tortellini over so the flatter side is facing upwards.
  10. Wet the two opposing corners and fold each towards the center (with overlap). Pinch them together.
  11. Transfer each completed tortellini to a well-floured tray where they can remain like this until you're ready to cook them.
  12. The tortellini can be refrigerated in this state for a few days, or frozen for months.
  13. To cook, drop them into salted, boiling water; when they start to float they are done –they can still be frozen at this point (if you had frozen them).

Alan Bell: OpenERP and Heartbleed

Planet Ubuntu - Fri, 04/11/2014 - 07:39

No doubt by now you will have seen loads of stuff in the media about the Heartbleed bug. This is a pretty bad bug; there have been other huge bugs in the past too, but this one has a very media-friendly name and a cute logo, so it gets the coverage that it deserves. In short, it affects HTTPS connections to web servers, and other types of server that use SSL in less obvious ways. We have been updating and fixing servers that we host, but we know that rather a lot of people have been using our guides to installing OpenERP. If you have, and you set up the HTTPS connections to the server (part 2 of the guides), then you are probably vulnerable to the Heartbleed bug. OpenERP itself does not do the HTTPS bit; we used either Apache or Nginx as a reverse proxy to add the SSL layer.

Firstly, use this testing tool http://filippo.io/Heartbleed to see if your system is vulnerable. You may need to check the box to ignore certificates if you are using a self-signed certificate. The fix to OpenSSL is already in the Ubuntu repositories, so you just need to pull the upgrade (this will update all packages, which is fine):

sudo apt-get update
sudo apt-get dist-upgrade

and then restart your webserver service, which could be Apache or Nginx. If you can’t remember which, just try both; one will fail with an unrecognised service error.

sudo service nginx restart
sudo service apache2 restart

This might get you up and running in seconds, but I found that on one machine the OpenERP process had got a bit upset. If you can’t log in after restarting the web process, you could restart the OpenERP server process, or just restart everything with:

sudo reboot

Now use http://filippo.io/Heartbleed again to confirm that you are fixed.
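
If you want a local check as well as the web tool, something along these lines works (a rough sketch; exact package versions vary per release, and the lsof line spots services still holding the old, deleted library in memory):

dpkg -l openssl libssl1.0.0                               # confirm the patched packages are installed
openssl version -b                                        # the build date should be on or after the fix
sudo lsof -n | grep -w DEL | grep -E 'libssl|libcrypto'   # anything listed here still needs a restart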

If you are not using https you might be fine; you have an inherently less secure connection to your server, but the server won’t serve up its memory to anyone who asks for it. Even if you are not using https right now, do update anyway, it is a good thing to do.

Zygmunt Krynicki: Checkbox challenges for 2015

Planet Ubuntu - Fri, 04/11/2014 - 05:29
Having a less packed day for the first time in a few weeks, I was thinking about the next steps for the Checkbox project. There are a few separate big tasks that I think should happen over the next 6-18 months.

First of all, our large collection of tests needs maintenance. We need to keep adapting it to changing requirements and new hardware. We need to fix bugs and make it more robust. We also need to add some level of polish to the user interface, to make sure all our test programs behave in a uniform way, use correct wording, can be localized, etc. Those are all important to keep the project healthy. We also have a big challenge ahead of us, with the whole touch world entering the Ubuntu ecosystem. We will have to revisit some decisions, decide which libraries, tools and layers to use to test certain features, and make sure we don't leave anything behind. This is very challenging as we really have a lot of existing tests. We also need to make them work the same way regardless of how they are started (classic Ubuntu, touch Ubuntu, remote Ubuntu server).

The core tools got an amazing boost over the past 12 months, going from pretty old technology that was very flexible but hard to understand and modify, to something that is probably just as flexible but far easier to understand and work with. Still, it's not all roses. The Ubuntu SDK UI needs a lot of work to get right. It has usability issues and architecture design issues. We also have a big disconnect between the core technology (python3) and the Qt+QML/C++ codebase, talking over D-Bus with the rest of the stack. That brings friction and is 10x harder to modify than an all-python solution. Ideally we'd like to switch to PyQt, but how that fares with the future Touch world is hard to say. I suspect that our remote testing story will help us have a smooth transition that won't compromise our existing effort and equally won't collide with the direction set by the first Ubuntu touch release.

Perhaps not in the spotlight, but we definitely need to work on "whitelists" (aka test plans). We need to learn how our users take our stack and remix it to solve their problems. Our test plan technology is ancient and shows its weaknesses. We need 2.0 test plans that allow us to express the problems we need to solve clearly, unambiguously and efficiently. We need to improve our per-device-instance test support. We need to provide rich meta-data for user interfaces. We need better vocabulary to create true test plans that can react to results in a way unconstrained by the design of the legacy checkbox first written over seven years ago. We also need to execute those changes in a way that has no flag days or burnt bridges. Nobody likes to build on moving sand, and we're here to provide a solid foundation for other teams at Canonical and everyone in the free software ecosystem.

Lastly, we have the elephant in the room called deployment. Checkbox doesn't by itself handle deploying system images and configuration onto bare metal (we have a very old support project for doing that), and the metal is changing very rapidly. Servers are quite unlike desktops, laptops (Ethernet-less ultrabooks?) and, most importantly, tablets and the whole touch-device ecosystem behind them. In the next 12 months we need a very good story and a solid plan on how to execute the transition from what we have now onto something that keeps us going for the next few years, at least. Canonical luckily has such a project already: MAAS. MAAS was envisioned for big iron hardware, but if you look at it from our point of view we really want a uniform API for all hardware, from that big-ass server in a data centre somewhere across the globe to that development board on your desk which will be the next tablet or phone product. We want to do the same set of operations on all of the devices in this spectrum: manage, control, track, re-image. The means and technology to do that differ widely, and from experience I can tell you this is a zoo with all the queer animals you can think of, but I'm confident we can make it work.

So there you have it. Checkbox over the next 12+ months, as seen through my eyes.

Ubuntu GNOME: Ubuntu GNOME Trusty Tahr Release Candidate

Planet Ubuntu - Fri, 04/11/2014 - 03:56

Hi,

This is the final week of the Trusty Tahr cycle. We are now at the very last phase of this cycle, called the Final Freeze and Release Candidate.

The Final Freeze vs The Final Release
You need to understand the difference between The Final Release and The Final Freeze.

Final Freeze – April 10th

Final Release – April 17th

Adam Conrad from the Ubuntu Release Team has explained this in detail in his email announcing the Final Freeze of the Trusty Tahr cycle.

What does all this mean?
It means that Ubuntu GNOME Trusty Tahr Daily Builds are considered to be RC.

What does RC (Release Candidate) mean?

Release Candidate

“During the week leading up to the final release, the images produced are all considered release candidates.”

The Final Round of Testing Ubuntu GNOME Trusty Tahr
This is the final round and the last week to test Ubuntu GNOME Trusty Tahr.

The Ubuntu GNOME QA Team is now testing the Ubuntu GNOME Trusty Tahr Release Candidate.

As always, your help, support and testing are highly needed and greatly appreciated.

All about Testing Ubuntu GNOME.

Download and Test Ubuntu GNOME Trusty Tahr Release Candidate.

Feel free to Contact Us.

Thank you for choosing, testing and supporting Ubuntu GNOME. Without your great and amazing support, we would never have reached this point.

Stephan Adig: Network Engineers, we are looking for you!

Planet Ubuntu - Fri, 04/11/2014 - 01:24

So, we have a Datacenter Engineer Position open, and also a Network Engineer Position.

And as a prerequisite, you should be able to travel through Europe without any issues, and you should read/write/speak English in addition to your native language.

When you

  • are comfortable with travel
  • are familiar with routers and switches of different vendors
  • know that bonding slaves don’t need a safe word
  • know that BGP is no medical condition
  • know how to crimp CAT 5/6/7
  • know the differences between the different types of fibre-optic (LWL) cable connections
  • have fun working with the smartest guys in this business
  • even want to learn something new
  • love games
  • love streaming
  • love PlayStation (well, this is not a must)

Still with me?

You will work out of our Berlin Office, which is in the Heart of Berlin.

You will work directly with our Southern California-based Network Engineering Team, with our Datacenter Team and with our SRE Team.

The Berlin team is a team of several nationalities, which combines the awesomeness of Spanish, Italian, French and German Minds. We all love good food and drinks, good jokes, awesome movies, and we all love to work in the hottest datacenter environments ever.

Is this something for you?

If so, you should apply now.

And applying for this job is as easy as provisioning a Cisco Nexus router today.

Two ways:

  1. You point your browser to our LinkedIn Page and press ‘Apply Now’. (Please refer to me, and mention where you read this post.)
  2. Or you send your CV directly through the usual channels to me (PDF or ASCII, with a profile picture attached) and I’ll put you on top of the stack.

Hope to see you soon and welcome you as part of our Sony/Gaikai Family in Berlin

I know some people are afraid of LinkedIn so here is the official job description from our HR Department.

Job Description:

As a Network Engineer with deployment focus you will be responsible for rollout logistics, network deployment process and execution. You will work closely with remote Network Engineers and Datacenter Operations to turn up, configure, test and deliver Network platforms across POPs and Datacenters.

Principal Duties / Responsibilities:
  • Responsible for rollout logistics and coordination
  • Responsible for network deployment processes
  • Responsible for network deployment execution
  • Deployment and provisioning of Transport, Routing and Switching platforms
Required Knowledge / Skills:
  • Comfortable with travel
  • Comfortable with optical transport, DWDM
  • Comfortable with various network operating systems
  • Comfortable with some network testing equipment
  • Comfortable with structured cabling
  • Comfortable with interface and chassis diagnostics
  • Comfortable with basic power estimation and calculation
Desired Skills and Experience Requirements:
  • BA degree or equivalent experience
  • 1-3 years working in a production datacenter environment
  • Experience with asset management and reporting
  • Knowledge of various vendor RMA processes to deal with repairs and returns

  • Keen understanding of data center operations, maintenance and technical requirements including replacement of components such as hard drives, RAM, CPUs, motherboards and power supplies.

  • Understanding of the importance of Change Management in an online production environment
  • High energy and an intense desire to positively impact the business
  • Ability to rack equipment up to 50 lbs unassisted

  • High aptitude for technology

  • Highly refined organizational skills
  • Strong interpersonal and communication skills and the ability to work collaboratively
  • Ability to manage multiple tasks at one time

Up to 50% travel required with this position.

Javier L.: UGJ-MX, 5-April-2014

Planet Ubuntu - Thu, 04/10/2014 - 12:33

Last weekend the Ubuntu-mx team hosted their fourth UGJ in Mexico City! Isn’t it wonderful when you meet like-minded people and everything just flows? We discussed in detail Free Software, Ubuntu, the Ubuntu MX team and our favorite quesadilla recipes (I love the ones with chorizo and cheese). We took a bunch of photos and video for those who couldn’t attend =(

Anyway, thanks for attending and we’ll see you in the next one! Have fun =D


Ubuntu Podcast from the UK LoCo: S07E02 – The One Where Everybody Finds Out

Planet Ubuntu - Thu, 04/10/2014 - 06:27

We’re back with the second episode of Season Seven of the Ubuntu Podcast! Alan Pope, Mark Johnson, Tony Whitmore, and Laura Cowen are drinking tea and eating early Easter cakes in Studio L.

Download OGG | Download MP3

In this week’s show:

We’ll be back next week, so please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow our twitter feed http://twitter.com/uupc
Find our Facebook Fan Page
Follow us on Google Plus

Kubuntu Wire: Install Kubuntu on Windows XP Systems

Planet Ubuntu - Thu, 04/10/2014 - 06:27

KDE-friendly web magazine Muktware has posted an article on installing Kubuntu on Windows XP systems, aimed at the millions of Windows XP machines which are now out of support. With an SSL vulnerability making the national news, you really can’t afford to be out of support.

 

Ubuntu App Developer Blog: Ubuntu App Showdown – some more time to tidy things up

Planet Ubuntu - Thu, 04/10/2014 - 03:05

Shortly before the submission deadline last night we had some small technical hiccups in the Ubuntu Software Store. This was resolved very quickly (thanks a lot to everyone who worked on this!), but we decided to give everyone another day to make up for it.

The new deadline is today, 10th April 2014, 23:59 UTC.

Please all verify that your app still works, everything is tidy, you submitted it to the store, and you filled out the submission form correctly. Here’s how.

Submit your app

This is obviously the most important bit and needs to happen first. Don’t leave this to the last minute. Your app might have to go through a couple of reviews before it’s accepted in the store. So plan in some time for that. Once it’s accepted and published in the store, you can always, much more quickly, publish an update.

Submit your app.

Register your participation

Once your app is in the store, you need to register your participation in the App Showdown. To make sure your application is registered for the contest and the judges review it, you’ll need to fill in the participation form. You can fill it in any time between now and the submission deadline, and it should only take you 2 minutes to complete.

Fill out the submission form.

Questions?

If you have questions or need help, reach out (also rather sooner than later) to our great community of Ubuntu App Developers.

Ubuntu Server blog: OpenStack Continuous Integration on Ubuntu 101

Planet Ubuntu - Wed, 04/09/2014 - 17:39

We (the Canonical OIL dev team) are about to finish the production roll out of our OpenStack Interoperability Lab (OIL). It’s been an awesome time getting here so I thought I would take the opportunity to get everyone familiar, at a high level, with what OIL is and some of the cool technology behind it.

So what is OIL?

For starters, OIL is essentially continuous integration of the entire stack, from hardware preparation, to Operating System deployment, to orchestration of OpenStack and third party software, all while running specific tests at each point in the process. All test results and CI artifacts are centrally stored for analysis and monthly report generation.

Typically, setting up a cloud (particularly OpenStack) for the first time can be frustrating and time consuming. The potential combinations and permutations of hardware/software components and configurations can quickly become mind-numbing. To help ease the process and provide stability across options we sought to develop an interoperability test lab to vet as much of the ecosystem as possible.

To accomplish this we developed a CI process for building and tearing down entire OpenStack deployments in order to validate every step in the process and to make sure it is repeatable. The OIL lab is comprised of a pool of machines (including routers/switches, storage systems, and compute servers) from a large number of partners. We continually pull available nodes from the pool, set up the entire stack, go to town testing, and then tear it all back down again. We do this so many times that we are already deploying around 50 clouds a day and expect to scale this by a factor of 3-4 with our production roll-out. Generally, each cloud is composed of about 5-7 machines, but we have the ability to scale each test as well.

But that’s not all: in addition to testing we also do bug triage and defect analysis, and work both internally and with our partners on fixing as many things as we can, all to ensure that deploying OpenStack on Ubuntu is as seamless a process as possible for both users and vendors alike.

Underlying Technology

We didn’t want to reinvent the wheel, so we are leveraging the latest Ubuntu technologies as well as some standard tools to do all of this. In fact, the majority of the OIL infrastructure is public code you can get and start playing with right away!

Here is a small list of what we are using for all this CI goodness:

  • MAAS — to do the base OS install
  • Juju — for all the complicated OpenStack setup steps — and linking them together
  • Tempest — the standard test suite that pokes and prods OpenStack to ensure everything is working
  • Machine selection & random config generation code — to make sure we get a good hardware/software cross-section
  • Jenkins — gluing everything together

Using all of this we are able to manage our hardware effectively, and with a similar setup you easily can too. This is just a high-level overview, so we will have to leave the in-depth technological discussions for another time.
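
To give a flavour of what the Juju side of that looks like, here is a heavily trimmed sketch (not the actual OIL pipeline; it assumes a MAAS-backed Juju environment and leaves out most of the services, relations and configuration a real OpenStack cloud needs):

juju bootstrap                                     # take a node from MAAS for the orchestration
juju deploy keystone                               # identity service
juju deploy nova-cloud-controller
juju deploy nova-compute
juju add-relation keystone nova-cloud-controller
juju add-relation nova-cloud-controller nova-compute
juju status                                        # watch it come up before pointing Tempest at it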

More to come

We plan on having a few more blog posts covering some of the more interesting aspects (both results we are getting from OIL and some underlying technological discussions).

We are getting very close to OIL’s official debut and are excited to start publishing some really insightful data.

Ben Howard: Updated 12.04.4 LTS Cloud Images in response to Heartbleed OpenSSL bug

Planet Ubuntu - Wed, 04/09/2014 - 13:04
Many of our Cloud Image users have inquired about the availability of updated Ubuntu Cloud Images in response to the Heartbleed OpenSSL vulnerability [1]. Ubuntu released updated OpenSSL packages yesterday, 08 April 2014 [2]. Due to the exceptional circumstances and severity of the Heartbleed OpenSSL bug, Canonical has released new 12.04.4 LTS images at [3]. In the coming days, new Cloud Images for Ubuntu 12.10 and Ubuntu 13.10 will be released.

Canonical is working with Amazon to get the Quickstart and the AWS Marketplace links updated. In the meantime, you can find new AMI IDs for 12.04.4 LTS at [3] and [4]. Also, the snapshots for Amazon have the volume-create permission granted on the latest images.

Windows Azure [5], Joyent [6] and HP [7, 8, 9] all have updated 12.04.4 LTS images in their respective galleries.

If you are running an affected version of OpenSSL on 12.04 LTS, 12.10 or 13.10, you are strongly encouraged to update. For new instances, it is recommended to either use an image with a serial newer than 20140408, or to update your OpenSSL package immediately upon launch. Finally, if you need documentation on enabling unattended upgrades, please see [10].
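
For the "update immediately upon launch" route, a first-boot sketch like this is usually enough (the package names assume the libssl1.0.0 library shipped on 12.04 through 13.10; restart anything linking OpenSSL, or simply reboot, afterwards):

sudo apt-get update
sudo apt-get install --only-upgrade openssl libssl1.0.0   # pull just the patched OpenSSL packages
sudo reboot                                               # or restart the individual services that use libssl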


[1] https://www.openssl.org/news/secadv_20140407.txt
[2] http://www.ubuntu.com/usn/usn-2165-1/
[3] http://cloud-images.ubuntu.com/releases/precise/release-20140408/
[4] http://cloud-images.ubuntu.com/locator/ec2/
[5] Azure: Ubuntu-12_04_4-LTS-amd64-server-20140408-en-us-30GB
[6] Joyent Image "ubuntu-certified-12.04", fe5aa6c0-0f09-4b1f-9bad-83e453bb74f3
[7] HP US-West-1: 27be722e-d2d0-44f0-bebe-471c4af76039
[8] HP US-East-1: 8672f4c6-e33d-46f5-b6d8-ebbeba12fa02
[9] Waiting on HP for replication to legacy regions az-{1,2,3}
[10] https://help.ubuntu.com/community/AutomaticSecurityUpdates

Dustin Kirkland: Ubuntu 14.04 LTS -- Security for Human Beings

Planet Ubuntu - Wed, 04/09/2014 - 07:00


In about an hour, I have the distinct honor to address a room full of federal sector security researchers and scientists at the US Department of Energy's Oak Ridge National Labs, within the Cyber and Information Security Research Conference.

I'm delighted to share with you the slide deck I have prepared for this presentation.  You can download a PDF here.

To a great extent, I have simply reformatted the excellent Ubuntu Security Features wiki page our esteemed Ubuntu Security Team maintains into a format I can deliver as a presentation.

Hopefully you'll learn something!  I certainly did, as I researched and built this presentation ;-)
On a related security note, it's probably worth mentioning that Canonical's IS team has updated all SSL services with patched OpenSSL from the Ubuntu security archive and has restarted all relevant services (using Landscape, for the win) to protect against the Heartbleed vulnerability.


Stay safe,
Dustin

Julian Andres Klode: ThinkPad X230 UEFI broken by setting a setting

Planet Ubuntu - Wed, 04/09/2014 - 05:57

Today, I decided to set my X230 back to UEFI-only boot, after having changed that for a BIOS upgrade recently (to fix a resume bug). I then chose to save the settings and received several error messages telling me that the system ran out of resources (probably storage space for UEFI variables).

I rebooted my machine and saw no logo appearing, just something like an underscore on a text console. The system appears to boot normally otherwise, and once the i915 module is loaded (and we’re switching away from UEFI’s Graphics Output Protocol [GOP]) the screen works correctly.

So it seems the GOP broke.
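
One thing worth checking before going any further is how crowded the UEFI variable store actually is, for example along these lines (a sketch; it assumes the kernel exposes the variables via efivarfs and that efibootmgr is installed):

ls /sys/firmware/efi/efivars | wc -l   # rough count of stored UEFI variables
sudo efibootmgr -v                     # list boot entries; a stale one can be removed with -b XXXX -B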

What should I do next?


Filed under: General

Stephan Adig: We are hiring

Planet Ubuntu - Wed, 04/09/2014 - 01:03

Normally I don’t write this type of post, but I know what’s coming up here, and we need people.

As long as you have a European passport and/or a visa which entitles you to travel across Europe without issues, you are already interesting to us.

You are even more interesting when

  • you like working in a fast paced environment
  • you like working with Hardware
  • you are not afraid of moving several hundred racks (yes, racks, not servers) of bare metal
  • you like working in an environment where OpenSource is one of the main drivers
  • you like working with the smartest people in our business
  • you like automation
  • you like being in a Datacenter
  • you like gaming
  • you like streaming
  • you like traveling
  • you read/write/speak English (technically and socially)
  • you like Sony PlayStation (oh well, that’s a plus but not a must ;))
  • you are not afraid

If most of this applies to you, we want to hear from you.

You’ll work from Berlin, Germany’s capital. Our office is in the heart of Berlin, one of the nicest places in this city.

We are a team of French, Italian, Spanish and German People.

You’ll work closely with the Southern California-based US team as well as with the EU SRE Team.

If you think you are the right person, what are you waiting for?

Applying for this job is as easy as installing Ubuntu.

Two ways to apply:

  1. You apply for the job on our LinkedIn Page and refer to me (Stephan Adig) (you can also mention where you read this post)
  2. Or you send me an email with all your details and your CV (PDF or ASCII, with a picture attached) and I’ll put you on top of the stack.

Anyways, I know some people are scared of LinkedIn so here is the official job description from our HR Department:

Data Center Operations Engineer Job description

Gaikai (外海, lit. “open sea”, i.e. an expansive outdoor space) is a company which provides technology for the streaming of high-end video games. Founded in 2008, it was acquired by Sony Computer Entertainment in 2012. Its technology has multiple applications, including in-home streaming over a local wired or wireless network (as in Remote Play between the PlayStation 4 and PlayStation Vita), as well as cloud-based gaming where video games are rendered on remote servers and delivered to end users via internet streaming (such as the PlayStation Now game streaming service). As a startup, before its acquisition by Sony, the company announced many partners using the technology from 2010 through 2012, including game publishers, web portals, retailers and consumer electronics manufacturers.

Gaikai is looking for a talented Data Center Operations Engineer to be based in our Berlin office. This position is for an experienced candidate who will work within the Data Center Operations team and have hands-on responsibility for ensuring our production datacenter environments are operating efficiently. This position will work closely with the System Engineering and Network Operations teams and provide hands-on support for them. The primary responsibility of this job role is to rack and cable new hardware, upgrade existing servers and network equipment, and keep accurate inventory information for all systems. You will also be responsible for assisting in the development of processes and procedures related to hardware deployment, upgrades and break/fix issues.

Key Responsibilities:

  • Support existing hardware in multiple datacenter locations
  • Plan and execute installations in multiple datacenter locations in a timely manner
  • Ensure accurate inventory information for multiple datacenter locations
  • Work closely with Data Center Operations team to track orders and deliveries to multiple datacenter locations
  • Work with the Director of Data Center Operations on datacenter status reports for Senior Management for each datacenter location
  • Refine and document support process for each location including the handling of RMA requests
Desired Skills and Experience

Requirements:

  • BA degree or equivalent experience
  • 1-3 years working in a production datacenter environment
  • Experience with asset management and reporting
  • Knowledge of various vendor RMA processes to deal with repairs and returns
  • Keen understanding of data center operations, maintenance and technical requirements including replacement of components such as hard drives, RAM, CPUs, motherboards and power supplies.
  • Understanding of the importance of Change Management in an online production environment
  • High energy and an intense desire to positively impact the business
  • Ability to rack equipment up to 50 lbs unassisted
  • High aptitude for technology
  • Highly refined organizational skills
  • Strong interpersonal and communication skills and the ability to work collaboratively
  • Ability to manage multiple tasks at one time

Up to 50% travel required with this position.

Ubuntu GNOME: Upgrade Testing

Planet Ubuntu - Wed, 04/09/2014 - 00:44

Hi,

Ubuntu GNOME, as an official flavour of Ubuntu, has the same Release Schedule as Ubuntu, and the same goes for all the other official flavours as well.

When it comes to Testing Ubuntu GNOME, we need to make sure everything is working as expected without any problem.

That said, we would like to invite you to help Ubuntu GNOME with Upgrade Testing.

How to help Ubuntu GNOME with Upgrade Testing?
The idea is very simple. We need to upgrade Ubuntu GNOME 13.10 to Ubuntu GNOME Trusty Tahr and test the upgrade process.

If you have Ubuntu GNOME 13.10 installed already, we would really appreciate your help in this regard.

If Ubuntu GNOME 13.10 is not installed, then kindly install it and do the upgrade. Installing Ubuntu GNOME from LiveUSB should not take more than 10 minutes.

How to do an upgrade from 13.10 to Trusty Tahr?
Before we get into this, kindly have a read of the Upgrades Documentation.

Whether you’re helping the Ubuntu GNOME Team with Testing or you’re a fan of running unstable releases on your machine, kindly make sure to back up your important files before anything else.

To upgrade Ubuntu GNOME 13.10 Stable to the Ubuntu GNOME Trusty Tahr Development Release, kindly have a read of Upgrading to Development Releases.
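
In practice the upgrade usually boils down to something like the following (a sketch; the linked documentation remains the authoritative reference, and the -d flag is what pulls in the development release):

sudo apt-get update
sudo apt-get dist-upgrade     # bring 13.10 fully up to date first
sudo do-release-upgrade -d    # then upgrade to the Trusty Tahr development release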

Share your Testing Results
Please make sure to share your Testing Results with Ubuntu GNOME QA Team. The more feedback in this regard, the better.

Let’s make sure that our very first LTS release of Ubuntu GNOME is as solid as a rock.

Thank you for helping, supporting and testing Ubuntu GNOME!

As always, for more information about testing, please see Ubuntu GNOME Testing Wiki Page.

Should you have any question, please don’t hesitate to Contact Us.

Happy Testing

Daniel Pocock: Double whammy for CACert.org users

Planet Ubuntu - Tue, 04/08/2014 - 22:47

If you are using OpenSSL (or ever did use it with any of your current keypairs in the last 3-4 years), you are probably in a rush to upgrade all your systems and replace all your private keys right now.

If your certificate authority is CACert.org then there is an extra surprise in store for you. CACert.org recently changed their signing hash to SHA-512, and some client/server connections silently fail to authenticate with this hash. Any replacement certificates you obtain from CACert.org today are likely to be signed using the new hash. Amongst other things, if you use CACert.org as the CA for a distributed LDAP authentication system, you will find users unable to log in until you upgrade all SSL client code or change all clients to trust an alternative root.
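
To see which hash a replacement certificate was actually signed with, a quick check like this does the job (mycert.pem standing in for whatever your certificate file is called):

openssl x509 -in mycert.pem -noout -text | grep 'Signature Algorithm'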

Costales: #startubuntu

Planet Ubuntu - Tue, 04/08/2014 - 10:46
Discover a new space for your computer
Today is the day! Choose usability, beauty, speed, freedom, community!
Artwork by Rafael Laguna.
