Does your license include the HA and Enterprise Grid options?
Are you planning to implement both HA and Grid, or HA alone?
Is the product PowerCenter alone, or PowerCenter plus Data Quality? If both, the number of application services defines the CPU/core requirements.
The answers to these questions determine the Informatica environment setup that conforms to the license limits.
These questions are good, but they don't really bear on my question:
As I mentioned, the non-production environment license is UNLIMITED CPU Core based.
Keeping this in mind, what are the general pros and cons of keeping the SDLC phases in individual domains on separate nodes, compared to keeping them in individual domains on a single node?
I found something related at https://kb.informatica.com/faq/7/Pages/14/315297.aspx?myk=315297
a) We have PC AE & DQ SE.
b) We want to separate them. I initiated a separate conversation at:
c) The PC AE license includes the HA and Grid options, but we set up a fail-over mechanism only in the PROD environment, and even there we do it by other means - i.e. we do not make use of the HA option of PC.
Having separate boxes for everything means more machine and maintenance costs, no question about it.
On the other hand you're more flexible: for example, you can test an upgrade from 10.1 to 10.5 (as soon as 10.5 is available) on one box without affecting any other box / environment.
This applies not only to Informatica upgrades but also e.g. to DBMS client upgrades, OS upgrades, and the like.
Furthermore, having more than one environment on one box means you can't use Windows. Now I'm definitely no fan of Windows in general, but that's simply a technical fact your customer should be aware of.
Also, if the "general" box for the non-PROD environments breaks down, all three environments are affected and unavailable. If each environment sits on its own box, only that one environment becomes unavailable when its box breaks down.
Just my 2 cents.
We use Linux as the OS, and we plan to set up the new "shared" platform on an internal/private VMware cloud within the company. Machine costs are very low with the cloud, so I prefer the "separate box/machine" architecture style.
Of course, we plan to review all these architecture style options with Informatica soon:
ai) a separate machine/box for each SDLC phase (only for PC services)
aii) consolidate the non-prod SDLC phases on one box and put the prod SDLC phase on its own box ---> for DQ services. The rationale: not many DQ projects really need all the phases; a few just profile data, even in DEV, pointing to PROD data sets.
b) separate the PC services from the DQ services onto separate nodes and domains
ci) provide a dedicated repository, a dedicated Integration Service, and a dedicated file system for each customer of the shared platform
cii) provide dedicated folder(s) within a shared repository, a dedicated Integration Service, and a dedicated file system for each customer of the shared platform
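For what it's worth, option ai) ("one domain per SDLC phase, one node per domain") boils down to running `infasetup` once per box to create each domain and its gateway node. The sketch below is only illustrative: all domain/node names, hosts, ports, and credentials are placeholders I made up, it assumes Linux and an Oracle domain repository, and the exact option names can vary by release, so check the Command Reference for your version before using it.

```shell
# On the DEV box: define a standalone domain with its own gateway node.
# Every name, host, port, and password below is a placeholder.
./infasetup.sh defineDomain \
    -DomainName Domain_DEV \
    -AdministratorName admin -Password '<admin_pwd>' \
    -DatabaseType Oracle \
    -DatabaseAddress dbhost.example.com:1521 \
    -DatabaseUserName infa_dev -DatabasePassword '<db_pwd>' \
    -NodeName node_dev -NodeAddress devbox.example.com:6005

# Repeat on the TEST and PROD boxes with their own domain/node names
# (e.g. Domain_TEST / node_test) and their own repository schemas,
# so each SDLC phase is fully isolated from the others.
```

The point of the sketch is that each phase gets its own domain repository schema and its own gateway node, which is what makes the per-box upgrade and outage isolation discussed above possible.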
I am still considering whether I need to make any architecture decisions from other perspectives that would impact the "shared" platform setup.
Two points just came to my mind:
It might even make sense not to have certain DEV environments at all. For example, if some users profile data in PROD (and only in PROD), why should such a profiling environment be set up in DEV? After all, that would mean a DEV environment connecting to PROD data, and not every customer would agree to such an approach.
From this point of view it might make more sense to set up specialised profiling environments where they are needed, not in DEV.
Second, it might really make sense to separate the IDQ and PowerCenter servers completely, meaning distinct servers (nodes) for these environments. PowerCenter can handle huge volumes of data, but quite often the volumes aren't actually that big, so a medium-sized PowerCenter server would be sufficient.
On the other hand, small profiling environments don't need heavy servers either, but if there are more complex IDQ jobs to run on millions of rows, you might need stronger IDQ servers than PowerCenter servers. So the costs to be paid by the business teams might differ considerably here.
In addition, providing smaller servers, e.g. via AWS EC2, is easier than procuring a 32-core machine with 512 GB of RAM. So it could also become a matter of procurement time.
From these points of view I second your thoughts about having more flexibility in terms of hardware.