Yii2: Second Generation Yii

0 Comments | This entry was posted on Jul 01 2014

I found enough time recently to finally look into Yii2. I decided a good test project would be a cryptocurrency exchange tracker: it downloads the latest prices for all markets from both Cryptsy and Mintpal, then displays the data in charts so I can quickly scan trends across all currencies.

Yii2 and its dependencies can be installed and managed through Composer, which I enjoy. It removes the need to keep third-party packages in version control and makes installs, upgrades and deployments easier. The Yii2 documentation is again great and the community is already solid; any Yii2-specific questions I had were answered on the forum in good time.

Some things are different, and migrating projects from Yii 1.x to 2.x will take a lot of work. Yii2 uses namespaces, which means they need to be declared at the top of views and other files where that wasn't previously necessary. Getting instances of records is slightly different too, and changed several times during Yii2's evolution, though it is stable now. Many things are still the same: migrations, scaffolding, commands and nearly everything else work as before.

In my opinion Yii2 is still the best PHP framework and I can't wait to start a production project with it. Yii2 is still beta, but the code base has mostly settled, with only bug fixes remaining. My next task is to incorporate AngularJS into my Yii projects.
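To give a feel for the namespace and record-fetching changes mentioned above, here's a rough sketch; the Market model is hypothetical, standing in for one of the tracker's exchange markets:

use app\models\Market;   // namespaces now need declaring, even in view files

// Yii 1.x:  $market = Market::model()->findByPk($id);
// Yii 2:
$market = Market::findOne($id);                      // one record by primary key
$markets = Market::find()->orderBy('name')->all();   // query-builder style lookup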

Spreading the Word on Vagrant and Ansible

1 Comment | This entry was posted on May 31 2014

Over the last two months I have presented the advantages of using Vagrant and Ansible to the PHP Melbourne and Melbourne Linux user groups, and on both occasions it was well received. I demonstrated how development environments can be automated for teams, ensuring that everyone is running the same software at the same versions.

Getting a development environment up and running for your current project can be very time consuming and, when things behave strangely, hard to debug. If you have several developers running Windows, Mac and Linux, getting each developer's rig ready to start work can be unnecessarily difficult. With Vagrant and Ansible, one person can script the configuration and the others can simply run it to get the environment set up.

When talking to the Linux group I focused the Ansible part of the talk more on deploying to production servers (web, mail, etc.), where there is no real need for Vagrant. However, Vagrant is helpful here too, because it allows you to test your Ansible scripts locally before deploying to production systems, saving time and money.

A 20-minute presentation is enough to give some examples and a live demonstration, showing how easy these tools are to adopt and why they are worth considering in your own work.

My slides are available here and the working script is available on Github.

Setting Up Development Environments With Vagrant and Ansible

0 Comments | This entry was posted on Feb 19 2014

One of the reasons I love running Linux on my main laptop/workstation is that it gives me an ideal environment for developing web projects. However, recent developments in software are moving away from this model I have grown to love, towards running your dev environments in virtual machines.

Instead of running Apache (or Nginx), MySQL and PHP natively on my dev machine, I have found it is now easier to set up and run dev environments in virtual machines configured specifically for a given project, all automated through server management scripts. Initially this sounds like additional work, and it is, but it has several advantages:

  • Custom environments for each project
  • Easily deployable for other developers in your team
  • No environment setup knowledge required from other team members
  • Scripts can be reused for staging and production environments

What are Vagrant and Ansible?

Vagrant is software that allows you to easily build reproducible development environments for various operating systems. It runs on top of virtual machine platforms such as VirtualBox and, among other things, creates a synced folder that is accessible from your local file system, allowing you to use your IDE as you normally would, without the need to transfer files to the machine.

Ansible, like Puppet or Chef, is a server configuration management tool. However, the learning curve is a lot gentler and it doesn't require any agent software running on the remote servers: it configures the hosts over SSH.

By combining Vagrant with Ansible, you can give developers on any common operating system a working development environment within minutes, with no manual configuration on their part.

I have created a Vagrant/Ansible setup script, which can be found on Github. It configures a development virtual machine with the latest versions of Nginx, MariaDB and PHP on Debian 7.
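Once the Vagrantfile and playbook are in place, the day-to-day workflow is only a handful of commands:

vagrant up         # create the VM and run the Ansible provisioner
vagrant ssh        # log in to the running machine
vagrant provision  # re-run Ansible after editing the playbook
vagrant destroy    # throw the whole environment away when finished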

I think it's worthwhile for any development team to investigate using virtual machines like this, especially where complex environments are required.

NGINX config for CakePHP 1.3 (& PHP 5.4)

0 Comments | This entry was posted on May 05 2013

This afternoon I set up a virtual host in NGINX for a CakePHP 1.3.x project, in readiness for starting work with a new client tomorrow. However, once I had what looked like a correct configuration, CakePHP complained that friendly URLs were not set up correctly. I am running PHP 5.4.14 on my laptop and CakePHP 1.3 for the site, as that is what the current project runs.

There seem to be no examples on the web of how to get these two versions running together, so here is the configuration I got working, for anyone else who's stuck:

server {
    listen 80;
    server_name cakephp;
    root /var/www/cakephp/app/webroot/;

    access_log /var/log/nginx/cakephp/access.log;
    error_log /var/log/nginx/cakephp/error.log;

    location / {
        index index.php index.html index.htm;

        # Serve real files and directories directly...
        if (-f $request_filename) {
            break;
        }

        if (-d $request_filename) {
            break;
        }

        # ...and hand everything else to CakePHP's front controller
        # so its friendly URLs work.
        rewrite ^(.+)$ /index.php?url=$1 last;
    }

    location ~ .*\.php[345]?$ {
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /var/www/cakephp/app/webroot$fastcgi_script_name;
        include fastcgi_params;
    }
}
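Once the virtual host file is in place (paths above assume the project lives under /var/www/cakephp), the usual check and reload applies:

sudo nginx -t                # validate the configuration
sudo service nginx reload    # apply it without dropping connections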

Preparing for Massive Load with OpenStack

2 Comments | This entry was posted on Feb 25 2013

In November last year I again updated and hosted the website for the NAB Australian Football League Draft Tracker, and flew up to the Gold Coast to sit in on the event and ensure it all ran smoothly. The website (http://drafttracker.afl.com.au/) was built as a live tracker so the public could watch the picks as they happened.

It was designed to be lightweight for both server and browser: clients pulled in all site assets on the initial page load and then fetched tiny JSON files every ten seconds to look for updates [1]. As the admin added drafted players, records were updated in the database via PHP, which then wrote out static JSON files for clients to pull down and update the page. NGINX was used as the webserver. All of this let the server run with minimal effort during the busy night.
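The publish step was little more than serialising the latest picks to a static file after each admin update. A minimal sketch of the idea, with an illustrative path and a hypothetical fetchLatestPicks() helper rather than the production code:

// Regenerate the static file that browsers poll every ten seconds.
$picks = fetchLatestPicks();   // hypothetical helper that queries the database
$json  = json_encode($picks);

// Write to a temp file and rename so polling clients never see
// a half-written picks.json; rename() is atomic on the same filesystem.
$path = '/var/www/draft/webroot/picks.json';
file_put_contents($path . '.tmp', $json);
rename($path . '.tmp', $path);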

However, the trade period a few weeks earlier showed that online interest in the AFL had lifted significantly, and that I should prepare further to ensure load would not be a problem. As I host on Rackspace with their OpenStack cloud hosting, I was able to take advantage of their versatile system to easily build a solution to meet potential demand. I created an image of the server and made four new server instances from it to act as slaves. I then modified the site so that updates on the master would copy (via rsync) any changed JSON files to the slaves. Next I created a load balancer within the same interface with a few clicks, added the four slaves behind it, and finally pointed the domain name at the load balancer's IP address. Another benefit of this design was that the site administrator could make updates from an instance that was under no load, and was therefore unhindered by heavy traffic.
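The master-to-slave copy was plain rsync; something along these lines, with placeholder hostnames and paths:

for host in slave1 slave2 slave3 slave4; do
    rsync -az /var/www/draft/webroot/*.json $host:/var/www/draft/webroot/
done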

The draft ran for about 1.5 hours and saw 100,000 unique visitors, each polling the site every ten seconds. Monitoring the servers showed that my solution was complete overkill; one server alone would probably have been enough. But it's better to plan for the worst, and it was a great experience putting the solution together.

COSTING

Each of the four slaves ran with 512MB of memory at a cost of $0.03 per hour, or $0.15 per hour in total including the master. The load balancer costs a base of $0.015 per hour, scaling up with the number of connections. So the 1.5 hours of the draft itself cost well under a dollar, and even with the set-up running for quite a few days beforehand, the overall cost came to just a few dollars; negligible either way.

[1] The site was designed before the days of NodeJS, and WebSockets were not a solution for older browsers.

Upgrade to PHP 5.4 with Dotdeb

0 Comments | This entry was posted on Nov 08 2012

I have been using Dotdeb, the custom Debian package repository, for the last 15 months to keep all web packages at their latest versions. It's incredibly easy to install and beats waiting for the Debian team to update their packages. However, there was an issue when upgrading PHP from 5.3.x to 5.4.x on systems using the php5-fpm package under Nginx.

Returning to the problem after a couple of months, I found that the upgrade to 5.4 changes a major config option: the listen parameter in /etc/php5/fpm/pool.d/www.conf changes from:

listen = 127.0.0.1:9000

to:

listen = /var/run/php5-fpm.sock

This was causing an invalid gateway error. Once I discovered the change, the fix was a simple edit in the Nginx virtual host file, from:

fastcgi_pass   127.0.0.1:9000;

to:

fastcgi_pass unix:/var/run/php5-fpm.sock;

Finally, restarting Nginx resolved the issue, leaving the latest PHP 5.4 running on the server.

Gillette AFL Trade Tracker

0 Comments | This entry was posted on Oct 16 2012

My most recent engagement required me to build a CMS and front-end for the Australian Football League for the trade period. The CMS was built to allow editors to add news items, trades and free agency movements between the 18 clubs. The front-end displays those items while letting the end user filter them by given rules. Again I chose Yii to build this, as it's a great framework for rapid development but also robust and a pleasure to work with.

After designing the database I started building the models, views and controllers, before modifying the forms to match the experience required for an easy to use, intuitive CMS. In the main news feed section, the front-end results can be filtered by club, date and result type (e.g. trades only, or general comments), and the filters work together for fine control over the results shown. As each filter is applied, the results are returned and populated by AJAX requests, with all filters cleared by selecting Live Feed. The challenging part was deciding how to make the filters work together in the browser; I ended up building up the URL that would be passed in the AJAX request. Sessions could have worked too, but they were a problem for load balancing and caching, as I'll point out later.

The second view was a breakdown of trades in and trades out by club. The results for this view were pulled from the same data as the main feed, to save repetition when adding content. With the filters again loading via AJAX, this came together quickly. I'm impressed by the way Yii lets you reload content in partial views with just a few extra lines of code, writing the jQuery for you.

The third view shows the players that fans most want traded. This data is pulled from another website, trademachine.afl.com.au, whose results are user generated. I was able to build this view quickly too by connecting a second database, which is easy to do in Yii.
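For anyone wondering how the second database fits in, Yii 1.x only needs an extra connection component and a one-method override in the model. A minimal sketch, with illustrative names (tradeMachineDb, TradePoll and the credentials are placeholders, not the real ones):

// protected/config/main.php: register a second connection
'components' => array(
    'db' => array(/* primary connection */),
    'tradeMachineDb' => array(
        'class'            => 'CDbConnection',
        'connectionString' => 'mysql:host=localhost;dbname=trademachine',
        'username'         => 'reader',
        'password'         => 'secret',
    ),
),

// Then point the model backed by that database at the new component:
class TradePoll extends CActiveRecord
{
    public function getDbConnection()
    {
        return Yii::app()->tradeMachineDb;   // instead of the default 'db'
    }
}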

The site went live on October 1 and the demand was a lot greater than I was expecting, resulting in the server becoming overwhelmed and some slow or failed page loads. Being a little unprepared, I quickly made new instances of the server and put them behind a load balancer to meet demand; cloning servers and putting them behind a load balancer couldn't be easier than it is with Rackspace. This was quick and saved me a lot of pain early on. I then spent some time adding and fine-tuning the built-in caching that Yii provides. I had not used caching in Yii before, and I was surprised at how easy and effective it is. Although the live feed content is only cached for 60 seconds, the resources used on the server were dramatically reduced.

This is an example of adding caching to a given part of the site with Yii:

if ($this->beginCache('main-content', array(
    'duration' => 60,
    'varyByParam' => array('filter', 'club', 'dateRange'),
))) {
    $this->renderPartial('_entryList', array(
        'dataProvider' => $dataProvider,
    ));
    $this->endCache();
}

This caches the view for 60 seconds, and the varyByParam option tells the cache to take the GET variables filter, club and dateRange into account, ensuring each unique request is cached and returned as expected. This is essential because the view has a single URL but its content changes depending on which GET variables are supplied. If I had instead used sessions to keep track of which filters the browser had selected, the behaviour would have broken behind the cache and the load balancer, so sessions were not an option here.

Overall this was a fun project that had me building a solution for an event I have a lot of interest in. The result is an easy to use CMS with a great user experience on the front-end as well.

Defcon 2012

0 Comments | This entry was posted on Aug 15 2012

Last month I was one of 15,000 people who attended the Defcon computer security convention in Las Vegas. It was a fantastic four-day event, with presenters talking about their findings and projects on all things security.

Upon paying the $200 entry fee we were given the badge required for entry. This year's badge was electronic, and something of a puzzle. Through onboard lights and a light sensor, the badges would communicate with each other as they passed by. Via a USB port, we were also encouraged to program hacks onto them so that they behaved differently.

Defcon 2012 Badge

One of the most interesting events is Capture The Flag, where teams are set against each other to hack into their opponents' servers and capture so-called flags. Each team hardens its own servers before beginning to attack the others. From what I could gather they do this non-stop throughout the event, and the team that has gathered the most flags is deemed the winner.

My highlights were sitting in on talks by Kevin Mitnick on social engineering and by Kevin Poulsen on the exploits he used to get up to in his past. Having read books by both presenters, I was keen to hear what they had to say.

I would love to attend again next year. Anyone feel like sponsoring my trip?

OSCON 2012

0 Comments | This entry was posted on Jul 26 2012

For some years now I've been inspired to travel to the United States to attend the Open Source Convention (OSCON) in Portland, hoping to learn what new open source tools and resources developers from around the world are using to get their work done.

This year I made the journey and it was well worth it. About 3,000 people attended over the five days, all passionate about open source software. Most are developers, but all work with open source software in one way or another, and everyone is very willing to share their skills and experience.

A main focus of the conference was OpenStack (http://www.openstack.org/), an open source alternative to Amazon's cloud services and the primary thing I hoped to learn about when leaving Melbourne. OpenStack is being embraced by many businesses, and its founders from NASA have moved on to build their own businesses on OpenStack technologies. As some speakers discussed, there is still a lot of work to do before OpenStack has all the features required of a complete cloud services platform, but it's looking very promising.

I also got a lot out of talks on PHP, Vim, Twitter's Bootstrap and system performance tuning.

I also met lots of interesting people. Sitting down to lunch I found myself next to Sebastian Bergmann, who created PHPUnit, and on another day with OpenStack founder Josh McKenty. I also met some Ubuntu community members and some of the people behind MySQL (and MariaDB), Linode, Rackspace and many more.

Everyone is pushing the open source movement in the same direction: forward. It was a fantastic event and I hope to attend next year. However, tomorrow is day one of Defcon, which I'm very excited about.

How to mount an HFSPLUS partition in Linux

26 Comments | This entry was posted on May 14 2012

Update: I and others have found HFS+ support on Linux to be unreliable, so I have converted the drive to EXT4. Read this post's comments for more.

I recently purchased a 2TB external drive for my Linux media centre but could not work out why I couldn’t write to the drive regardless of the permissions I had set.

When mounting the drive I would get the following error:

mount: warning: /media/drive seems to be mounted read-only.

Here is the solution I pieced together from a collection of findings on the web. First, install hfsprogs and run a filesystem check:

sudo apt-get install hfsprogs    # for Debian-based distros
sudo fsck.hfsplus /dev/sdb2      # adjust for your device and partition

Then try mounting the drive again as a normal user and hopefully it will work.
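For reference, the remount itself looks like this, using the device and mount point from the error above:

sudo mount -t hfsplus -o rw /dev/sdb2 /media/drive
# Note: the kernel mounts journaled HFS+ volumes read-only regardless;
# disabling journaling from OS X, or mounting with -o force,rw, works around that.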