
Yii2: Second Generation Yii

0 Comments | This entry was posted on Jul 01 2014

I found enough time recently to finally look into Yii2. I decided that a good test project would be to build a cryptocurrency exchange tracker. It would download the latest prices of all markets from both Cryptsy and Mintpal and then display the data in charts so I could quickly scan trends across all currencies.

Yii2 and its dependencies can be installed and managed through Composer, which I enjoy: it saves you from keeping any third-party packages in version control and makes installs, upgrades and deployments easier. The Yii2 documentation is again great and the community is already solid. Any Yii2-specific questions I had were answered on the forum in good time. Some things are different, though, and migrating projects from Yii 1.x to 2.x will take a lot of work. Yii2 uses namespaces, which means namespaces need to be declared at the top of views and other files where that wasn't previously necessary. Getting instances of records is slightly different too, and was changed several times during Yii2's evolution; however, this is stable now.
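
As an illustration of the record-fetching change, assuming a hypothetical Post model (the Yii2 calls reflect its ActiveRecord API at the time of writing):

```php
<?php
// Yii2 models are namespaced, so files using them must import them first.
use app\models\Post;

// Yii 1.x style was: $post = Post::model()->findByPk($id);

// Yii2 equivalents:
$post   = Post::findOne($id);                          // single record by primary key
$recent = Post::find()->where(['status' => 1])->all(); // query-builder style
```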

Many things are still the same: migrations, scaffolding, commands and nearly everything else work as before. In my opinion Yii2 is still the best PHP framework and I can't wait to start a production project with it. Yii2 is still in beta, but the code base has mostly settled, with only bug fixes remaining. My next task is to incorporate AngularJS into my Yii projects.

Setting Up Development Environments With Vagrant and Ansible

0 Comments | This entry was posted on Feb 19 2014

One of the reasons I love running Linux on my main laptop/workstation is that I have an ideal environment for developing web projects. However, there have been many developments in software that move away from this model I have grown to love, towards running your dev environments in virtual machines.

Instead of running Apache (or Nginx), MySQL and PHP natively on my dev machine, I have found it's now easier to set up and run dev environments in virtual machines configured specifically for a given project, which can be automated through server management scripts. Initially this sounds like additional work, and it is, but it has several advantages:

  • Custom environments for each project
  • Easily deployable for other developers in your team
  • No server configuration knowledge required from other team members
  • Scripts can be reused for staging and production environments

What are Vagrant and Ansible?

Vagrant is software that lets you easily build reproducible development environments for various operating systems. It runs on top of virtual machine platforms such as VirtualBox and, among other things, creates a synced folder that is accessible from your local file system, allowing you to use your IDE as you normally would without needing to transfer files to the machine.

Ansible, like Puppet or Chef, is a server management tool. However, its learning curve is much gentler and it doesn't require any software running on the remote servers: it configures the hosts over SSH.

By combining Vagrant with Ansible, it’s very easy to create development environments for developers who are running any common operating system within minutes without having to manually configure their dev environments to suit their operating system.
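
A minimal sketch of how the two fit together; the box name, IP address and playbook path below are illustrative:

```ruby
# Vagrantfile (sketch)
Vagrant.configure("2") do |config|
  config.vm.box = "debian-7-wheezy"                 # illustrative box name
  config.vm.network "private_network", ip: "192.168.33.10"
  config.vm.synced_folder "./src", "/var/www"       # edit locally, serve from the VM

  # Hand provisioning over to Ansible once the VM is up
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "provisioning/playbook.yml"
  end
end
```

With this in place, `vagrant up` builds and provisions the whole environment in one step.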

I have created a Vagrant/Ansible setup script which can be found on Github. It will configure a development virtual machine with the latest versions of Nginx, MariaDB and PHP installed on Debian 7.

I think it’s worthwhile for any development teams to investigate using virtual machines like this, especially where complex environments are required.

Preparing for Massive Load with OpenStack

0 Comments | This entry was posted on Feb 25 2013

In November last year I again updated and hosted the website for the NAB Australian Football League Draft Tracker and flew up to the Gold Coast to sit in on the event to ensure it all went well. The website (http://drafttracker.afl.com.au/) was built as a live tracker so the public can watch the picks as they happen.

It was designed to be lightweight for both server and browser: clients pulled in all site assets on the initial page load and then only tiny JSON files every ten seconds to look for updates [1]. When the admin added drafted players as they happened, records were updated in the database via PHP, which regenerated the static JSON files for clients to pull down and update the page. Nginx was used as the webserver. All this allowed the server to run with minimal effort during the busy night.
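
The client side of that polling loop can be sketched as follows; the field and function names are hypothetical, and the original predates fetch, but the shape is the same:

```javascript
// Sketch of the ten-second polling loop. The JSON payload is assumed
// to carry a monotonically increasing pick count.
function hasNewPick(previous, latest) {
  return latest.pick > previous.pick;
}

function startPolling(url, render) {
  var state = { pick: 0 };
  setInterval(function () {
    fetch(url)
      .then(function (res) { return res.json(); })
      .then(function (latest) {
        if (hasNewPick(state, latest)) {
          render(latest); // re-draw only when something actually changed
          state = latest;
        }
      });
  }, 10000); // ten seconds, matching the production site
}
```

Because the JSON files are tiny and static, each poll is cheap for the server even with many thousands of clients.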

However, the trade period weeks earlier showed that online interest in the AFL had lifted significantly, and that I should prepare further to ensure load would not be a problem. As I host on Rackspace with their OpenStack cloud hosting, I was able to take advantage of their versatile system to easily build a solution to meet potential demand. I created an image of the server and made four new server instances from it to become slaves. I then modified the site so that updates on the master would copy (via rsync) any changed JSON files to the slaves. Next I created a load balancer within the same interface with a few clicks, added the four slaves behind it, and finally pointed the domain name at the load balancer's IP address. Another benefit of this design was that the site administrator could make updates from an instance under no load, unhindered by heavy traffic.
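
The master-to-slave copy step amounted to something like this, run after each admin update (host names and paths are hypothetical):

```shell
# Fan the regenerated JSON files out to each slave behind the load balancer
for slave in slave1 slave2 slave3 slave4; do
    rsync -az /var/www/draft/json/ "deploy@${slave}:/var/www/draft/json/"
done
```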

The draft ran for about 1.5 hours and saw 100,000 unique visitors, each polling the site every ten seconds. Monitoring the servers showed that my solution was complete overkill and one server alone would probably have been enough. But it's better to plan for the worst, and it was a great experience putting the solution together.

Costing:

Each of the four slaves ran with 512MB of memory at $0.03 per hour, or $0.15 per hour in total including the master. The load balancer costs a base of $0.015 per hour, scaling up with the number of connections. Therefore, for the 1.5 hours, the set-up would have cost just a few dollars at most. Of course I had it running for quite a few days beforehand, but overall the cost was negligible.

[1] The site was designed before the days of NodeJS, and WebSockets were not a solution for older browsers anyway.

OSCON 2012

0 Comments | This entry was posted on Jul 26 2012

For some years now I've been inspired to travel to the United States to attend the Open Source Convention (OSCON) in Portland. I hoped to learn what new open source tools and resources developers from around the world are using to get their work done.

This year I made the journey and it was well worth it. About 3,000 people attended over the five days, all of them passionate about open source software. Most are developers, but all work with open source software in one way or another, and everyone is very willing to share their skills and experience.

A main focus of the conference was OpenStack (http://www.openstack.org/), an open source alternative to Amazon's cloud services and the primary thing I hoped to learn about when I left Melbourne. OpenStack is being embraced by many businesses, and its founders from NASA have moved on to build their own businesses using OpenStack technologies. As some speakers discussed, there is still a lot of work to do before OpenStack has all the features required of a complete cloud services platform, but it's looking very promising.

I also got a lot out of talks about PHP, Vim, Twitter's Bootstrap and system performance tuning.

I also met lots of interesting people. Sitting down to lunch I found myself next to Sebastian Bergmann, who created PHPUnit, and on another day next to OpenStack founder Josh McKenty. I also met some Ubuntu community members and some of the people behind MySQL (and MariaDB), Linode, Rackspace and many more.

Everyone is pushing the open source movement in the same direction: forward. It was a fantastic event and I hope to attend next year. But tomorrow is day one of DefCon, which I'm very excited about.

runQuery: A New Tool To Query MySQL Databases

0 Comments | This entry was posted on May 07 2012

Recently I have been working on sites that don't allow SSH access, and I always find installing phpMyAdmin overkill and unnecessary for my needs. So I wrote a single-file script that you upload to your webserver, log in to the database with, and immediately start writing queries.

It lets you write complex queries with joins, or just simple inserts, updates or deletes, and quickly returns the result set with the number of rows found or affected, on a page that's clear and easy to read.

In the future I would like to add support for PostgreSQL and the ability to add or modify the data through a form.

The code is available to download from Github. Please try it and let me know what you think.

Why Use A No-SQL Database Like Redis

0 Comments | This entry was posted on Dec 01 2011

For the last three months I have been working on different websites I inherited that rely heavily on Redis. Redis is a NoSQL database that stores data as key/value pairs, but it does not give you the flexibility to write queries the way you're used to with relational databases like MySQL and Postgres.

Interestingly, these sites also use MySQL. I had never had a chance to use or learn about NoSQL databases before, and the idea of using two types of database in one application sounded like a convoluted and unnecessary solution. However, the more I use Redis (especially in these applications) the more I love it.

Redis is used in these applications for caching. When a request is made, rather than PHP sending queries to MySQL, it requests the data directly from Redis, which serves it straight from RAM. This results in a much faster response time and requires fewer resources from the server.

As there are already some good tutorials on what Redis is and how to use it, I will show you some great tricks I found through my travels that I did not see in the general documentation. Start the Redis client (redis-cli) and try the following two tricks:

List All Keys In The Database:

keys *

Show Variable Type:

type <variable-name>

Monitor Queries:
Monitor the queries being sent to Redis by using telnet to log in to Redis on the port it's running on (default 6379) and typing monitor; the same MONITOR command also works from redis-cli. This is very helpful when queries are being sent from an application and you need to debug exactly what the query is.

Running Multiple Instances Of Redis:
As I am running multiple applications that require Redis, I needed to learn how to run multiple instances of it. This is because you don't define separate databases as you would with MySQL, for example: there are no logins and no way to clearly separate data between applications. An excellent description of how to run multiple instances of Redis can be found at chrislaskey.com.
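
In practice that means giving each instance its own config file, with at least a distinct port, pidfile and data directory (paths below are illustrative), and starting redis-server against it:

```
# /etc/redis/redis-6380.conf: a second instance alongside the default one
port 6380
pidfile /var/run/redis/redis-6380.pid
logfile /var/log/redis/redis-6380.log
dir /var/lib/redis-6380
```

The second instance is then started with redis-server /etc/redis/redis-6380.conf, and the application that owns it connects to port 6380 instead of 6379.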

Start Learning:
To get a better understanding of Redis I recommend the online practical guide at try.redis-db.com. It explains the different types of values available and how to access them.

Conclusion:
Redis has shown me what's available in NoSQL databases and that relational databases may not always be the answer. I can see that as I use Redis more in my own projects I will find it useful for other purposes. One possibility is storing variables that I may previously have put into sessions.

Yii Framework Issue With Nginx And SSL

0 Comments | This entry was posted on Oct 09 2011

During the process of moving my websites from one host to another, and from Apache to Nginx, I came across an issue that had me frustrated yet intrigued. I have a Yii application that I use for my business and run over SSL. This worked fine under Apache, and for the most part under Nginx. The problem only appeared in Nginx when a form was posted: the browser would be redirected from HTTPS to HTTP. Why was this?

After some Google searching, investigation of the Yii core framework and some trial and error, I found that Yii relies on what I determined is a non-standard 'HTTPS' server variable with a value of 'on'. Nginx does not set this variable, so on a form post Yii would assume the browser was not in HTTPS mode.

The core Yii method that determines the protocol is:

public function getIsSecureConnection()
{
    return isset($_SERVER['HTTPS']) && !strcasecmp($_SERVER['HTTPS'],'on');
}

This can be overcome by adding a parameter to the Nginx virtual host record, but I see this as a workaround rather than a real solution. I thought a simple change from the non-standard 'HTTPS' variable to 'SERVER_PORT' (checking that it's set to 443), which is set by both Apache and Nginx, would be a better fix. I have sent a bug report off to the Yii project with my suggestion, so maybe we'll see it in a future release.

Update: 18 October 2011
My suggested fix was added to the Yii code base but was then reverted, because a site running on port 443 is not necessarily running over SSL. There is no standard header that browsers send when running over SSL, so a non-ideal solution must be used. I believe the best option is to add the HTTPS parameter to the Nginx config as suggested.
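
For reference, that workaround amounts to one extra fastcgi_param line in the SSL server block. This is a sketch with most directives omitted, assuming PHP is served over FastCGI:

```nginx
server {
    listen 443 ssl;
    # ... ssl_certificate, root, and other directives ...

    location ~ \.php$ {
        include fastcgi_params;
        # The workaround: tell PHP (and therefore Yii) the connection is secure
        fastcgi_param HTTPS on;
        # ... fastcgi_pass and the rest of the PHP handling ...
    }
}
```

With this in place, $_SERVER['HTTPS'] is populated exactly as Yii's getIsSecureConnection() expects.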

Keeping Your LAMP Server Up To Date With Dotdeb

0 Comments | This entry was posted on Sep 13 2011

I have been maintaining Debian-based Linux servers for some years now, and at times I find it frustrating that the latest versions of my favourite packages are not yet available because of the delay in getting the newest version into the official repositories. This leaves you needing to build the program from source.

I recently discovered a project designed to get around this problem. Dotdeb is a repository for Debian systems that carries the latest versions of PHP, MySQL, Redis, Apache, Nginx and other common web packages, ready to install or upgrade. With little more than adding the Dotdeb repository URL to your sources.list file and updating, the newest version of each package is immediately available.
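
The steps amount to something like the following, run as root; the repository line and key URL here are from memory of the Dotdeb instructions, so check the project's website for the current versions:

```shell
# Add the Dotdeb repository (the release name, here "squeeze",
# should match your Debian version):
echo "deb http://packages.dotdeb.org squeeze all" >> /etc/apt/sources.list

# Trust Dotdeb's signing key, then refresh the package lists:
wget -O - http://www.dotdeb.org/dotdeb.gpg | apt-key add -
apt-get update
```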

I love that I can now have the most up-to-date packages with minimal fuss, leaving me time to get back to development.

To find out more visit the project’s website.

Four Ways To Ease Facebook Application Development

0 Comments | This entry was posted on Mar 24 2010

For the last two months I have been developing a couple of Facebook applications for clients. Developing new apps for Facebook can be difficult and very time consuming because the applications need to be hosted on a publicly available server rather than in your standard dev environment. This is a pain for several reasons, including the need to upload files each time a change is made, and that you don't want PHP errors or debugging messages being displayed publicly.

Therefore you will want to send the debug and error messages somewhere you can easily watch them. These tips are not complicated, and I would hope most developers already use them at least some of the time.

1. Custom logger

Rather than printing debug messages to the screen I suggest that you send them to a custom log file which you can watch as new entries are added. Create a function similar to the following:

function logger( $msg )
{
   file_put_contents( 'log.txt', date( 'Y-m-d H:i:s' ) ." $msg\n", FILE_APPEND );
}

Once this function has been defined, you can easily send debugging messages to the log like:

logger( 'name is set: '. $name );

The new string and the set variable will be appended to the end of the log.

2. PHP error logs

When working in your own dev environment, it's a must to have errors displayed directly in the browser: any warnings or fatal errors are immediately shown so you can fix them and move on. This is undesirable (for several reasons) on a publicly available site, so you need to log these to a file, which you should also watch.

There are several methods to set PHP logging:

Enabling PHP error logging through Apache config

This can be added in either of two places. The first option is only available if you have root privileges on the server. Find the Apache virtual host record (apache2ctl -S is handy for this) and add the following:

php_value error_reporting 6143
php_flag log_errors on
php_value error_log /var/log/apache2/vhosts/yourdomain-php_error.log

The second option is to create a file named .htaccess in the web root directory and add the same options. This may require AllowOverride to be set to All in the virtual host record to work.

Enabling PHP error logging with PHP

The same options can be added directly into your scripts like this:

ini_set( 'error_reporting', 6143 );
ini_set( 'log_errors', 'on' );
ini_set( 'error_log', '/var/log/apache2/vhosts/yourdomain-php_error.log' );

If you plan to develop for a long period, it may be best to send the log file to the /tmp directory so it doesn't fill the disk on your server.

3. Apache logs

Apache logs are also very useful when developing Facebook applications. By watching these files you can see when and what Facebook is downloading from your server. I found this extremely useful when making Ajax calls to the server, to see what $_GET variables were being sent.

These log files can usually be found somewhere in /var/log/apache2 but it may be easier again to check with apache2ctl -S to see exactly where the log is being saved.

Watching the logs

The best way to watch these log files is to SSH into the server and follow them with the tail command and its -f (follow) parameter.

tail -f /var/log/apache2/vhosts/yourdomain-php_error.log

By following the file you don't need to keep closing and re-opening it to see new entries.

4. Rsync

The best tool for uploading changed files is the rsync command. It compares your local files with the remote ones and uploads only what has changed, which beats the hell out of using FTP. I usually create a script to run it that looks like this:

rsync -r --verbose ./public_html/* username@hostname:/var/www/yourdomain.com/public_html

This will prompt for a password each time, but that can be overcome by setting up SSH keys. Follow this tutorial on how to set-up ssh keys.
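
For reference, the key set-up is only two commands, using the same username@hostname as the rsync example (both commands are interactive):

```shell
# Generate a key pair if you don't already have one (accept the defaults):
ssh-keygen -t rsa

# Install the public key into the server's authorized_keys:
ssh-copy-id username@hostname
```

After this, rsync over SSH connects without prompting for a password.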

Conclusion

I hope this helps others develop and debug Facebook applications. If you have further hints or ideas, I would love to hear them.

Website Build With Flash/HTML Integration

0 Comments | This entry was posted on Dec 15 2009

Last week, we at Sputnik Agency pushed live the site we had been building for our parent company, Kit Digital. What I like most about this website is that it required us to include dynamic Flash navigation with little overhead, and to have the Flash headers and HTML update seamlessly without reloading the page.

Project platform

We decided to go with WordPress as the project required a good CMS that was easy for the client to use and was quick and easy to develop.

Flash navigation with little overhead

There are two main Flash headers that include navigation. The navigation needed to be dynamic in that if new pages were created in WordPress, the Flash navigation needed to include them too. Having the navigation work dynamically this way can create undesirable overhead, as it would generally require multiple calls to the database.

I overcame this by creating WordPress plugins that used hooks to regenerate XML files whenever pages were added, updated or removed. These XML files included the hierarchical page information required by the navigation, so when a page is loaded in the front-end, the Flash just reads the XML file rather than forcing PHP to make database queries. This resulted in faster page loads and reduced server overhead.
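
A heavily simplified sketch of such a plugin hook; the XML-building details and output path are hypothetical, while save_post, deleted_post, get_pages() and ABSPATH are real WordPress APIs:

```php
<?php
/*
Plugin Name: Nav XML Exporter (sketch)
*/

// Regenerate the navigation XML whenever a page is saved or deleted,
// so the Flash never needs to trigger a database query itself.
function nav_xml_regenerate() {
    $pages = get_pages();                      // hierarchical page list from WordPress
    $xml   = new SimpleXMLElement('<nav/>');
    foreach ($pages as $page) {
        $node = $xml->addChild('page', htmlspecialchars($page->post_title));
        $node->addAttribute('id', $page->ID);
        $node->addAttribute('parent', $page->post_parent);
    }
    file_put_contents(ABSPATH . 'nav.xml', $xml->asXML()); // hypothetical path
}
add_action('save_post',    'nav_xml_regenerate');
add_action('deleted_post', 'nav_xml_regenerate');
```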

One other benefit of using SWFAddress (covered in the next section) is that although no page load occurs, you can still click back through the pages of content you have viewed. The browser will not behave this way with standard content replacement using AJAX.

Seamlessly update flash header and update content

The next task was to allow links (whether clicked in the Flash navigation or the HTML navigation) to update the Flash navigation and the HTML without reloading the page. This seemed like a very difficult task and something I had not seen before, but we managed it using SWFAddress.

Using the SWFAddress Javascript library we could update the URL in the address bar, which triggered both the HTML and the Flash to change their behaviour. Once the change was caught, I used jQuery to make a request to another custom WordPress plugin that pulled page content from the database and updated the HTML without reloading the page. Clicking the Flash links did the same.

The end result is a very sleek and fast loading website where the content is completely CMS driven.

To see these pages in action, visit these links: VX Platform, Global.