A draw with the taste of defeat. We were much the better side. Not to mention Dentinho's wrongly disallowed goal, and São Paulo's goal, scored by Washington from a doubtful position.
We dropped 5 points in two games. That's a lot. For me, our chances at the Triple Crown are over.
Now it's time to expand the squad list and keep the key pieces for the Libertadores. That title has to come, and yes, it is an obligation, Mr. Andres Sanches. We have to win it.


Date: Sunday, 27/09/2009
Venue: Morumbi stadium, São Paulo (SP)
Referee: Ricardo Marques Ribeiro (MG)
Assistants: Carlos Augusto Nogueira Junior and Emerson Augusto de Carvalho (SP)
Yellow cards: Defederico, Jorge Henrique, William (COR); Dagoberto, Richarlyson, Washington (two) (SP)
Red cards: Washington (SP)
Goals: Ronaldo, 20 min into the first half; Washington, 25 min into the second half

Bosco; Renato Silva, André Dias e Miranda; Jean, Richarlyson (Marlos), Hernanes, Jorge Wagner (Hugo) e Junior Cesar; Borges (Washington) e Dagoberto
Coach: Ricardo Gomes

Felipe; Alessandro, William, Paulo André e Marcinho; Marcelo Mattos, Jucilei e Defederico (Moradei); Jorge Henrique (Souza), Dentinho e Ronaldo (Bill)
Coach: Mano Menezes



No comment.


Date: Sunday, 20/09/2009
Venue: Pacaembu stadium, São Paulo (SP)
Referee: Marcelo de Lima Henrique (Fifa-RJ)
Assistants: Hilton Moutinho Rodrigues (Fifa-RJ) and Dibert Pedrosa Moisés (RJ)
Attendance: 35,748 fans
Gate receipts: R$ 1,209,559.50
Yellow cards: Elias, Marcelo Mattos (COR); Leandro Euzébio, Fernando (GOI)
Goals: Iarley, 7 min, and Fernandão, 23 min into the first half; Iarley, 5 min, Dentinho, 28 min, and João Paulo, 34 min into the second half

Felipe; Balbuena, Chicão (Bill) e Diego; Alessandro, Marcelo Mattos, Jucilei, Elias e Marcelo Oliveira (Marcinho); Dentinho e Ronaldo
Coach: Mano Menezes

Harlei; Ernando, Leandro Euzébio e João Paulo; Vítor, Fernando (Gomes), Everton, Léo Lima (Ramalho) e Júlio César (Zé Carlos); Fernandão e Iarley
Coach: Hélio dos Anjos


Coritiba 1 x 1 Corinthians

A draw with the taste of defeat, since we had the chance to close in on the league leader.


Edson Bastos, Rodrigo Heffner (Márcio Gabriel), Cleiton, Dirceu e Renatinho; Jaílton (Bruno Batata), Leandro Donizeti, Pedro Ken e Marcelinho Paraíba; Thiago Gentil (Carlinhos Paraíba) e Ariel
Coach: Ney Franco

Felipe; Balbuena, Chicão, Paulo André e Diego; Marcelo Mattos, Marcelo Oliveira (Bill), Elias e Jucilei (Moradei); Dentinho e Souza (Alessandro)
Coach: Mano Menezes

Date: 16/09/2009 (Wednesday)
Venue: Couto Pereira stadium, Curitiba (PR)
Referee: Elmo Alves Resende Cunha (GO)
Assistants: Alessandro Rocha de Matos (Fifa-BA) and Marco Antonio Martins (SC)
Yellow cards: Cleiton, Renatinho, Marcelinho Paraíba, Jaílton (CTA); Jucilei, Paulo André (COR)
Goals: Jaílton, 27 min into the first half; Dentinho, 5 min into the second half


Cloud Computing Best Practices


Some of the key things to think about when putting your application on the cloud are discussed below. Cloud computing is relatively new, and best practice is still being established. However, we can learn from earlier technologies and concepts such as utility computing, SaaS, outsourcing and even in-house enterprise data-centre management, as well as from experience with vendors such as Amazon and FlexiScale.

Licensing: If you are using the cloud for spikes or overspill, make sure that the products you want to run there are licensed for that kind of use. Some products restrict their licences from being used in a cloud setting. This is especially true of commercial Grid, HPC or DataGrid vendors.

Data transfer costs: When using a provider like Amazon with a detailed cost model, make sure that any data transfers stay internal to the provider's network rather than crossing its boundary. In Amazon's case, internal traffic is free, but you will be charged for any traffic over the external IP addresses.
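As a back-of-the-envelope illustration of why this matters, here is a minimal sketch of the internal-versus-external cost difference. The per-gigabyte rate is a hypothetical placeholder, not Amazon's actual price:

```python
# Sketch of a transfer-cost check, assuming a simplified pricing model in
# which intra-provider traffic is free and external egress is billed per GB.
EXTERNAL_RATE_PER_GB = 0.10  # hypothetical $/GB for external traffic

def transfer_cost(gb: float, internal: bool) -> float:
    """Return the estimated cost of moving `gb` gigabytes."""
    return 0.0 if internal else gb * EXTERNAL_RATE_PER_GB

# Routing the same 500 GB internally instead of externally avoids the fee.
print(transfer_cost(500, internal=True))   # internal: free
print(transfer_cost(500, internal=False))  # external: billed per GB
```

Even a rough model like this, filled in with your provider's real rates, makes it obvious which data flows are worth redesigning.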

Latency: If you have low-latency requirements, the cloud may not be the best environment in which to meet them. An ERP or similar system may find cloud latency good enough, but a trading venue such as an FX exchange has far more stringent latency requirements. It is essential to understand the performance requirements of your application and to be clear about what is deemed business-critical.

One vendor that has focused on attacking low latency in the cloud is GigaSpaces, so if you require low latency in the cloud, it is one of the companies you should evaluate. For processing distributed data loads there is also the map/reduce pattern and Hadoop. These architectures eliminate the boundaries created by scale-out, database-centric approaches.
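For readers unfamiliar with the pattern, here is a toy sketch of map/reduce word counting. A real Hadoop job distributes the map, shuffle and reduce phases across many machines; this single-process illustration only shows the shape of the computation:

```python
# Minimal illustration of the map/reduce pattern Hadoop popularised:
# map emits (key, 1) pairs, a shuffle groups them by key, reduce sums.
from collections import defaultdict
from itertools import chain

def map_phase(doc: str):
    return [(word, 1) for word in doc.split()]

def reduce_phase(pairs):
    counts = defaultdict(int)
    for word, n in pairs:  # shuffle + reduce collapsed into one pass
        counts[word] += n
    return dict(counts)

docs = ["to be or not to be", "to be is to do"]
pairs = chain.from_iterable(map_phase(d) for d in docs)
counts = reduce_phase(pairs)
print(counts)  # {'to': 4, 'be': 3, ...}
```

Because each map call is independent and reduction is per key, the work partitions naturally across nodes, which is exactly what removes the scale-out database bottleneck mentioned above.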

State: Check whether your cloud infrastructure provider offers persistence. When an instance is brought down and then back up, all local changes are wiped and you start with a blank slate. This obviously has ramifications for instances that need to store user or application state. To address this on its platform, Amazon delivered persistent storage for EC2, in which data can remain linked to a specific computing instance. Make sure you understand the state limitations of any cloud computing platform you work with.
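One common way to live with ephemeral instances is to write state through to durable storage outside the instance. A minimal sketch, with a plain dict standing in for the external store (in practice this would be EC2 persistent storage, S3, or a database):

```python
# Sketch of externalising user state so an instance restart loses nothing.
# EXTERNAL_STORE is a stand-in for a durable service outside the instance.
EXTERNAL_STORE = {}  # survives "instance" restarts in this simulation

class AppInstance:
    def __init__(self):
        self.local_cache = {}  # wiped whenever the instance is replaced

    def save(self, key, value):
        self.local_cache[key] = value
        EXTERNAL_STORE[key] = value  # write-through to durable storage

    def load(self, key):
        # after a restart the cache is empty, so fall back to the store
        return self.local_cache.get(key) or EXTERNAL_STORE.get(key)

a = AppInstance()
a.save("session:42", "logged-in")
b = AppInstance()            # simulate the instance being re-provisioned
print(b.load("session:42"))  # state recovered from the external store
```

The design choice is simply to treat local disk as a cache and the external store as the source of truth, so losing an instance costs nothing but warm-up time.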

Data regulations: If you are storing data in the cloud, you may be breaching data-protection laws depending on where your data is stored, i.e. in which country or continent. To address this, Amazon S3 now supports location constraints, which let you specify where in the world to store a bucket's data, and provides a new API call to retrieve the location constraint of an existing bucket. If you are using another cloud provider, check where your data is stored.
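A sketch of what pinning a bucket to a region looks like. The helper below only builds the request parameters, so it runs without AWS credentials; with an AWS client library you would pass these parameters to the bucket-creation call:

```python
# Sketch of building S3 bucket-creation parameters with a location
# constraint, so a bucket's data stays in a chosen region.
def create_bucket_params(name: str, region: str) -> dict:
    params = {"Bucket": name}
    if region != "us-east-1":  # the classic US region takes no constraint
        params["CreateBucketConfiguration"] = {"LocationConstraint": region}
    return params

print(create_bucket_params("my-eu-data", "eu-west-1"))
```

Keeping the constraint explicit in code, rather than relying on a console default, makes the jurisdiction of each bucket auditable.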

Dependencies: Be aware of dependencies between service providers. If service 'y' depends on service 'x', then when you subscribe to service 'y' and service 'x' goes down, you lose your service. Always check the dependencies of any cloud service you use.

Standardisation: A major issue with current cloud computing platforms is that there is no standardisation of the APIs and platform technologies that underpin the services provided. Besides reflecting a lack of maturity, this means you need to consider how locked in you will be when choosing a cloud platform, because migrating between platforms may be very difficult, if not impossible. This may not be an issue if your supplier is IBM and always likely to be IBM, but it will be an issue if you are just dipping your toe in the water and discover that other platforms are better suited to your needs.

Security: Lack of security, or an apparent lack of it, is one of the perceived major drawbacks of working with cloud platforms and technology. Sensitive data should be encrypted when moved around or stored in a public cloud, and it is important to consider a secure ID mechanism for authentication and authorisation of services. As with normal enterprise infrastructures, open only the ports you need and consider installing a host-based intrusion detection system such as OSSEC. The advantage of working with an enterprise cloud provider such as IBM or Sun is that many of these security measures are already taken care of. See our prior blog entry on securing n-tier and distributed applications on the cloud. Be sure to check out Amazon's new VPC initiative, as well as VPN-Cubed by CohesiveFT, if you have to tie public clouds together with private applications, services or infrastructure. If you need to keep costs down and evaluate free options, look at OpenVPN.

Compliance: Regulatory controls mean that certain applications may not be deployable in the cloud. For example, the US Patriot Act could have very serious consequences for non-US firms considering US-hosted cloud providers. Be aware that cloud computing platforms are often assembled from components supplied by a variety of vendors, who may themselves provide computing in a variety of legal jurisdictions. Be very aware of these dependencies and factor them into any operational risk assessment. See also my prior blog entry on this topic.

Quality of service: You will need to ensure that the behaviour and effectiveness of the cloud application you implement can be measured and tracked against existing or new service-level agreements. We have previously discussed some of the tools that come with this built in (GigaSpaces) and others that provide functionality you can use with your cloud architecture (RightScale, Scalr, etc.). Achieving quality of service encompasses scaling, reliability, service fluidity, monitoring, management and system performance.
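As a concrete illustration of tracking behaviour against an SLA, here is a minimal sketch assuming the agreement is phrased as a 95th-percentile latency budget; the threshold and sample values are invented for the example:

```python
# Sketch of checking measured response times against an SLA expressed as
# "95th-percentile latency under 200 ms" (a hypothetical agreement).
def percentile(samples, pct):
    ordered = sorted(samples)
    index = min(len(ordered) - 1, int(len(ordered) * pct / 100))
    return ordered[index]

def meets_sla(samples_ms, threshold_ms=200, pct=95):
    return percentile(samples_ms, pct) <= threshold_ms

latencies = [120, 130, 90, 180, 150, 110, 160, 140, 100, 170]
print(meets_sla(latencies))  # True: p95 is within the 200 ms budget
```

In practice a monitoring tool would collect the samples, but the point stands: an SLA is only enforceable if the metric behind it is measured continuously and compared to an explicit threshold.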

System hardening: Like all enterprise application infrastructures, you need to harden the system so that it is secure, robust and meets your functional requirements. See my prior blog entry on system hardening for Amazon EC2.

Content adapted from my book "The Savvy Guide to HPC, Grid, DataGrid, Virtualisation and Cloud Computing", available on Amazon.

Career Changing

This isn't the first time I've changed career paths. A couple of years ago, I gave up my career in financial companies (retail banking services) to start from scratch in the IT world. I studied computer networking and earned the CCNA certification, but couldn't land a job due to my lack of IT experience. I then had the opportunity to start working in development: first Java, then VB.NET. After about two years, I'm changing my career again, now combining my acquired IT experience with my broad knowledge of banking services, working as a Solutions Architect for banking & back-office. To get there, I'll need the PMP certification, and from the beginning of 2010 I'll be focused on that. For now, I've managed to finish my postgraduate degree in SOA-based Software Engineering. I intend to pave the way for the PMP certification by doing my final course project on SOA project management, and also to put my ITIL v2 knowledge to use.
So, from today, Bala's Blog will feature posts related to project management (but not only that).

And if I can give one piece of advice to anyone else, it is this:

Don't be afraid. Go ahead! Believing in yourself is the first and most important step.


Corinthians 2 x 1 Santos

For the club's 99th birthday party, nothing better than a comeback win over Peixe. Sensational! And we're right on the heels of the G4.


Felipe; Jucilei, Chicão, Paulo André e Balbuena; Moradei (Marcelo Oliveira), Elias e Boquita; Jorge Henrique, Dentinho (Henrique) e Souza (Bill)
Coach: Mano Menezes

Felipe; George Lucas, Fabão, Eli Sabiá e Léo, Emerson (Pará), Rodrigo Mancha, Róbson (Germano), Madson (Neymar) e Paulo Henrique; Kleber Pereira
Coach: Vanderlei Luxemburgo

Date: 02/09/2009 (Wednesday)
Venue: Pacaembu, São Paulo (SP)
Referee: Guilherme Cereta de Lima (SP)
Assistants: Nilson de Souza Monção (SP) and Giovani Cesar Canzian (SP)
Gate receipts: R$ 821,268.00
Attendance: 25,645 paying fans
Yellow cards: Emerson, Robson, Felipe, Fabão (SAN); Boquita (COR)
Red cards:
Goals: Eli Sabiá, 6 min into the second half; Bill, 34 min into the second half; Chicão, 43 min into the second half


How to Build a Cloud Without Using Virtualization


Leveraging Java EE and dynamic infrastructure to enable a shared resource, on-demand scalable infrastructure – without server virtualization

Many pundits and experts allude to architectures that are cloud-like in their ability to provide on-demand scalability but do not – I repeat do not – rely on virtualization, i.e. virtual machines. But rarely – if ever – is this possibility described. So everyone says it can be done, but no one wants to tell you how.

Maybe that’s because it appears, on the surface, to not be cloud. And perhaps there’s truth to that appearance. It is more pseudo-cloud than cloud – at least by most folks’ definition of cloud these days – and thus maybe you really can’t do cloud without virtualization. There’s also the fact that there is virtualization required – it’s just not virtualization in the way most people use the term today, i.e. equating it with VMware, or Xen, or Hyper-V.

But it does leverage shared resources to provide on-demand scalability, and that’s really what we’re after with cloud in the long run, isn’t it?


One of the tenets of cloud is that scalability is achieved through the use of shared resources on demand. Anyone who has deployed a Java EE environment knows that it is, above all else, a shared environment. The Java EE application server is essentially a big container, and it performs many of the same functions traditionally associated with virtualization platforms: it abstracts the operating system, receives requests via the network and hands them to the appropriate application, and so on. It's not a perfectly analogous relationship, but the concept is close enough.

So you have a shared environment in which one or more applications might be deployed. The reason this is cloud-like is that just because an application is deployed in a given application server doesn’t mean it’s running all the time. In fact, it doesn’t even need to be loaded all the time, just deployed and ready to be “launched” when necessary.

In order to provide the Java EE "cloud" with mobility, we employ a file virtualization solution to normalize file access across a shared, global namespace. Each application server instance accesses the same application resource packages from the normalized file system, thus reducing the storage requirements on the individual server platforms.

The application delivery controller (a.k.a. load balancer plus) virtualizes the applications to provide unified access to the applications regardless of which application server instance they may be launched on. The application delivery controller, assuming it is infrastructure 2.0 capable, is also responsible for the implementation of the “on-demand scalability” necessary to achieve cloud-like status.

The “secret sauce” in this architectural recipe is the ability to integrate the application delivery controller (hence the requirement that it be Infrastructure 2.0 capable) and the application server infrastructure. This integration is really a collaboration that enables a controlling management application to instruct the appropriate application server to launch a given application upon specified conditions – typically upon reaching a number of connections that, once surpassed, is known to cause degradation of performance or the complete depletion of available resources.
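The decision logic described above can be sketched in a few lines. The threshold and the "launch"/"unload"/"hold" vocabulary are hypothetical stand-ins for whatever conditions and control-plane calls a real deployment would use; Python is used here purely for illustration:

```python
# Sketch of the controlling management application's scaling decision:
# when connections per application instance pass a threshold known to
# degrade performance, ask an application server to launch another copy.
LAUNCH_THRESHOLD = 100  # hypothetical connections-per-instance limit

def scale_decision(current_connections: int, running_instances: int,
                   threshold: int = LAUNCH_THRESHOLD) -> str:
    per_instance = current_connections / max(running_instances, 1)
    if per_instance > threshold:
        return "launch"   # instruct an app server to load the application
    if running_instances > 1 and per_instance < threshold * 0.3:
        return "unload"   # release otherwise idle shared resources
    return "hold"

print(scale_decision(250, 2))  # 125 conns/instance -> "launch"
print(scale_decision(40, 2))   # 20 conns/instance  -> "unload"
```

The numbers come from the application delivery controller's view of connections and capacity; the decision's output is what gets translated into a remote-control call to the application server.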

Because the application delivery controller is mediating for the applications, it has a view of both the client-side and server-side environments, as well as the network. It knows how many connections are currently in use, how much bandwidth is being consumed and even – when configured to do so – the current capacity of each of the application servers. And it knows this per "network virtual server", which generally corresponds to an application.

All this information can be retrieved by the controlling management application via the application delivery controller's service-enabled control plane, a.k.a. API (either RESTful or SOAPy, as per the vendor's implementation). The controlling management application uses this information to decide when (on-demand) to launch a new instance of an application on one of the application servers, or to unload one. Java EE application servers are essentially infrastructure 2.0 capable as well, providing several remote-control methods for managing an application and its environment.

Once the controlling management application has successfully launched (or unloaded) the application in the appropriate application server, the application itself becomes part of the process. A few lines of code effectively instrument the application to register – or deregister as the case may be – itself with the application delivery controller using the aforementioned control-plane. Once the application is registered, it is put into rotation and capacity of the application is immediately increased appropriately. On-demand, using otherwise idle-resources, as required by the definition of “cloud.”
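Those "few lines of code" might look something like the following sketch. The DeliveryController class and its register/deregister methods are hypothetical stand-ins for a real controller's REST or SOAP control plane:

```python
# Sketch of an application registering itself with the application
# delivery controller on startup, putting itself into rotation.
class DeliveryController:
    def __init__(self):
        self.pool = set()  # members currently in rotation

    def register(self, member: str):
        self.pool.add(member)      # capacity increases immediately

    def deregister(self, member: str):
        self.pool.discard(member)  # taken out of rotation on unload

adc = DeliveryController()

def on_application_start(adc, host, port):
    adc.register(f"{host}:{port}")

def on_application_stop(adc, host, port):
    adc.deregister(f"{host}:{port}")

on_application_start(adc, "appserver-2", 8080)
print(adc.pool)  # {'appserver-2:8080'}
```

Having the application announce itself, rather than having an operator edit the pool, is what makes the capacity increase immediate and removes human latency from the scaling loop.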

Wash. Rinse. Repeat.


Dynamic infrastructure, such as an infrastructure 2.0 capable application delivery controller, is a necessary component of any successful on-demand architecture, whether "real cloud" or "pseudo cloud." It is the ability of such infrastructure to interact and integrate with management and application infrastructure that enables the entire architecture to effect an on-demand scalable posture capable of utilizing shared resources – whether virtualized or not. Without a dynamic infrastructure this architecture would still be possible; one could manually perform the steps necessary to launch applications when and where necessary and then add them to the application delivery controller, but that would incur additional cost, and the human latency required to coordinate actions across multiple teams is, well, exceedingly variable – especially on weekends.

Certainly the benefits of a pseudo-cloud are similar, but not exactly the same, as those of a "real" cloud. You do get to take advantage of shared and quite possibly idle resources. You do get the operational efficiencies associated with automating the provisioning and de-provisioning of application instances. And you also get the cost reduction from leveraging a shared storage system. If business stakeholders are charged back only for what they use, then you're further providing value by potentially reducing the physical hardware needed to ensure resources are available for specific applications, much of which is often wasted by the over-provisioning inherent in traditional deployments. That reduces CapEx and OpEx, which is yet another touted benefit desired by those exploring both public and private cloud.

This isn’t a simple task. The sharing of resources – particularly in controlling thresholds per application – is more difficult without virtualization a la VMware/Xen/Hyper-V. It’s not nearly as easy as just virtualizing the applications and it requires a bit more planning in terms of where applications can be deployed, but the orchestration of the processes around enabling the on-demand capability is no more or less difficult in this pseudo-cloud implementation as it would be in a real-cloud scenario.

It can be done, and for organizations unwilling, for whatever reason, to jump into virtualization, it is an option that realizes many of the same benefits as a "real" cloud.


InfoQ: 3 Patterns from SOA Design Patterns by Thomas Erl

The first draft of SOA Design Patterns had 60 patterns, which were reviewed by more than 100 selected SOA specialists from all over the world. During the same period the draft was open to public review on soapatterns.org, and the SOA community was invited to contribute patterns of their own that they had used and validated in production. The response led to a collection of 34 new patterns. The end result is a catalog of 85 individual and compound patterns, plus 28 candidate patterns that are, as of today, subject to further review and validation by the SOA community. These patterns can be used as guidelines for solid SOA design and implementation. In this article we present three Inventory Governance Patterns from chapter 10 of the book: Canonical Expression, Metadata Centralization, and Canonical Versioning.


Corinthians Day

Timão turns 99 today. Let's celebrate: hang your flag in the window, wear your shirt to work or school, play the anthem at home or at the bar; in short, show your passion for Coringão even more today. And from now on it will be like this every year on September 1st: "Corinthians Day".

The celebrations of the Centenary year also begin today.

Visit the official Corinthians page here.