sabaTEL.net

Java, MySQL, Avaya

Deleting Old Log Files

I don’t need to explain how useful it can be to analyse the log files of our application, in particular when troubleshooting a fault. Sometimes those logs are big, very big, and not very flexible when it comes to rolling over.
In those cases an external task needs to be set up to avoid running out of disk space.
Here is a script I use for this task in Microsoft environments.


' DelOldLogs.vbs
' ***************************************************
' Script to delete files in a folder (and optionally its sub-folders)
' when the files are x number of days old.

' * Folder containing the log files to clean up.
Path = "C:\jboss-1\server\default\log"

' * Alter this variable if sub-folders should be processed as well.
Subfolders = False

' * Alter this variable to set how many days old a file should be before it is deleted.
' * Example: Lifetime = Date() - 10 will delete files that are 10 days old or more.
Lifetime = Date() - 9

FilesArray = Array()
Set fso = CreateObject("Scripting.FileSystemObject")

' Collect all files older than Lifetime, then delete them.
SelectFiles Path, Lifetime, FilesArray, Subfolders

For n = 0 To UBound(FilesArray)
    On Error Resume Next
    FilesArray(n).Delete True
    On Error GoTo 0
Next

' Fills FilesArrayToKill with the file objects under sPath whose
' last-modified date is older than vLifetime.
Sub SelectFiles(sPath, vLifetime, FilesArrayToKill, bIncludeSubFolders)
    On Error Resume Next
    Set folder = fso.GetFolder(sPath)
    Set files = folder.Files
    For Each file In files
        dtLastModified = Null
        On Error Resume Next
        dtLastModified = file.DateLastModified
        On Error GoTo 0
        If Not IsNull(dtLastModified) Then
            If dtLastModified < vLifetime Then
                count = UBound(FilesArrayToKill) + 1
                ReDim Preserve FilesArrayToKill(count)
                Set FilesArrayToKill(count) = file
            End If
        End If
    Next
    ' If sub-folders are selected, call the procedure again for each of them.
    If bIncludeSubFolders Then
        For Each fldr In folder.SubFolders
            SelectFiles fldr.Path, vLifetime, FilesArrayToKill, True
        Next
    End If
End Sub


The script is launched via a Scheduled Task that runs a batch file with this content:
cscript C:\jboss-1\server\default\DelOldLogs.vbs

June 13, 2011 | Microsoft

PROFILER and EXPLAIN

If we want to improve the performance of a query, these two tools can help us find the bottlenecks. But remember that the most important work comes earlier: designing the database appropriately.


mysql> DESC City;
+-------------+----------+------+-----+---------+----------------+
| Field       | Type     | Null | Key | Default | Extra          |
+-------------+----------+------+-----+---------+----------------+
| ID          | int(11)  | NO   | PRI | NULL    | auto_increment |
| Name        | char(35) | NO   |     |         |                |
| CountryCode | char(3)  | NO   |     |         |                |
| District    | char(20) | NO   |     |         |                |
| Population  | int(11)  | NO   |     | 0       |                |
+-------------+----------+------+-----+---------+----------------+
5 rows in set (0.00 sec)

mysql> SET PROFILING = 1;
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT ID, Name, Population FROM City ORDER BY Name DESC;
...
| 2760 | Aba                                | 298900     |
| 3318 | Aalborg                            | 161161     |
| 3097 | Aachen                             | 243825     |
|  670 | A Coruña (La Coruña)               | 243402     |
+------+------------------------------------+------------+
4079 rows in set (0.03 sec)

mysql> SET PROFILING = 0;
Query OK, 0 rows affected (0.00 sec)

mysql> SHOW PROFILES;
+----------+------------+-----------------------------------------------------------+
| Query_ID | Duration   | Query                                                     |
+----------+------------+-----------------------------------------------------------+
|        1 | 0.02274025 | SELECT ID, Name, Population FROM City ORDER BY Name DESC |
+----------+------------+-----------------------------------------------------------+
1 row in set (0.00 sec)

mysql> SHOW PROFILE FOR QUERY 1;
+----------------------+----------+
| Status               | Duration |
+----------------------+----------+
| starting             | 0.000056 |
| checking permissions | 0.000007 |
| Opening tables       | 0.000018 |
| System lock          | 0.000009 |
| init                 | 0.000014 |
| optimizing           | 0.000003 |
| statistics           | 0.000008 |
| preparing            | 0.000008 |
| executing            | 0.000002 |
| Sorting result       | 0.020042 |
| Sending data         | 0.002536 |
| end                  | 0.000006 |
| query end            | 0.000003 |
| closing tables       | 0.000009 |
| freeing items        | 0.000012 |
| logging slow query   | 0.000002 |
| cleaning up          | 0.000007 |
+----------------------+----------+
17 rows in set (0.00 sec)


The individual thread states are described at
http://dev.mysql.com/doc/refman/5.5/en/general-thread-states.html


mysql> EXPLAIN SELECT ID, Name, Population FROM City ORDER BY Name DESC;
+----+-------------+-------+------+---------------+------+---------+------+------+----------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra          |
+----+-------------+-------+------+---------------+------+---------+------+------+----------------+
|  1 | SIMPLE      | City  | ALL  | NULL          | NULL | NULL    | NULL | 4079 | Using filesort |
+----+-------------+-------+------+---------------+------+---------+------+------+----------------+
1 row in set (0.00 sec)
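
Here the type ALL plus “Using filesort” tells us the whole table is scanned and the rows are then sorted in a separate pass. Below is a minimal sketch of one possible fix, assuming we are free to add an index; the index name idx_city_name is just illustrative, and the plan you actually get back depends on the data and the server version.

-- A possible fix: add an index on the sort column
-- (the index name idx_city_name is just illustrative).
ALTER TABLE City ADD INDEX idx_city_name (Name);

-- With the index in place, a query that only needs the first rows can be
-- read in index order and skip the filesort entirely; for the full table
-- the optimizer may still prefer a scan plus filesort.
EXPLAIN SELECT ID, Name, Population FROM City ORDER BY Name DESC LIMIT 10;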


June 7, 2011 | MySQL

InnoDB Isolation Levels, Multi-Versioning and Concurrency

When multiple clients run transactions concurrently, three problems that may result are dirty reads, non-repeatable reads and phantoms.

  • A dirty read is a read by one transaction of uncommitted changes made by another (sketched after this list).
  • A non-repeatable read occurs when a transaction performs the same retrieval twice but gets different results each time.
  • A phantom is a row that appears in the result of a query where it was not visible in an earlier, otherwise identical query, typically because another transaction inserted it in the meantime.
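
A sketch of the first of these anomalies, using two clients and a hypothetical InnoDB table t that holds a single row (id = 1, val = 1):

-- Client A: change the row but do not commit yet.
START TRANSACTION;
UPDATE t SET val = 2 WHERE id = 1;

-- Client B: under READ UNCOMMITTED it already sees val = 2, a dirty
-- read; if client A now rolls back, client B has read a value that was
-- never part of any committed state.
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT val FROM t WHERE id = 1;

-- Client A:
ROLLBACK;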

InnoDB implements 4 isolation levels that control the visibility of changes made by one transaction to other concurrently executing transactions:

  • READ UNCOMMITTED allows a transaction to see uncommitted changes made by other transactions. Dirty reads, non-repeatable reads and phantoms are all possible.
  • READ COMMITTED allows a transaction to see changes made by other transactions only if they have been committed. Non-repeatable reads and phantoms are still possible.
  • REPEATABLE READ ensures that if a transaction issues the same SELECT twice, it gets the same result both times.
  • SERIALIZABLE completely isolates the effects of one transaction from others.

InnoDB operates by default in REPEATABLE READ mode. The isolation level can be set at startup time in the server configuration:
[mysqld]
transaction-isolation = READ-COMMITTED
But it can also be set dynamically on a running server, at different scopes:
SET GLOBAL TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
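
As a quick sanity check, and as a sketch of what REPEATABLE READ actually buys us, here is a small example. It assumes the City table from the previous post is an InnoDB table and that a second client runs the UPDATE at the marked point; on 5.5 the relevant variables are called tx_isolation.

-- Show the isolation level currently in effect at each scope.
SELECT @@GLOBAL.tx_isolation, @@SESSION.tx_isolation;

-- Client A: under READ COMMITTED the second SELECT sees the other
-- client's committed UPDATE (a non-repeatable read); under REPEATABLE
-- READ both SELECTs return the same snapshot.
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
START TRANSACTION;
SELECT Population FROM City WHERE ID = 1;
-- Client B meanwhile runs and commits:
--   UPDATE City SET Population = Population + 1 WHERE ID = 1;
SELECT Population FROM City WHERE ID = 1;   -- different value here
COMMIT;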

June 6, 2011 | MySQL

InnoDB Engine

  • Similar to the MyISAM and MERGE engines seen so far; the interesting differences appear in the disk management. Each InnoDB table is represented by an .frm format file, with data and index storage kept in the InnoDB tablespace. By default, a single shared tablespace is used by all the tables.
  • Supports transactions with commit and rollback, and provides full ACID compliance (see the sketch after this list).
  • Provides automatic recovery after a crash of the MySQL server or of the host where it runs.
  • Performance-wise it tends to do better than the previous two engines, but here you can get deadlocks.
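
A minimal sketch of the commit/rollback behaviour mentioned above; the table and values are made up, and on 5.5 InnoDB is already the default engine, so the ENGINE clause is only there to be explicit.

CREATE TABLE account (
    id      INT PRIMARY KEY,
    balance DECIMAL(10,2) NOT NULL
) ENGINE=InnoDB;

INSERT INTO account VALUES (1, 100.00), (2, 100.00);

-- Both updates become visible to other transactions only at COMMIT.
START TRANSACTION;
UPDATE account SET balance = balance - 10 WHERE id = 1;
UPDATE account SET balance = balance + 10 WHERE id = 2;
COMMIT;

-- A failed transfer can be undone atomically instead of being committed.
START TRANSACTION;
UPDATE account SET balance = balance - 500 WHERE id = 1;
ROLLBACK;   -- account 1 is back to 90.00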

June 2, 2011 | MySQL