Which of the following includes complete integration of information systems across departments in an organization?

Data Shadow Systems

Rick Sherman, in Business Intelligence Guidebook, 2015

Abstract

Data shadow systems (sometimes called spreadmarts) are common departmental systems (usually spreadsheets) that the more technically inclined members of business groups create so that their co-workers can gather and analyze data on their own when they do not want to work with IT or cannot wait for it. They may be simple one-off reports, or full-fledged analytical applications used extensively for data integration. Data shadow systems create silos, resulting in inconsistent data across the enterprise. They often frustrate IT groups, who do not understand why departments are not using the “official” IT BI tools; business people, for their part, see little reason to change, because the data shadow system gives them what they want. Resolving the problem of data shadow systems requires a compromise between the business and IT groups that does not lose the valuable parts of the data shadow systems. The BI team needs to identify them and either replace them or incorporate them into the overall BI program.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780124114616000162

Architecture of Clinical Computing Systems

Thomas H. Payne, Kent A. Beckton, in Practical Guide to Clinical Computing Systems (Second Edition), 2015

3.5 Best of Breed versus Suite from a Single Vendor

The phrase “best of breed” refers to the practice of acquiring departmental systems from a wide variety of vendors, each offering the best system for a particular department’s needs.3 Because of vendor specialization, this can result in the medical center having products from many vendors. The practice gained favor in the 1990s along with optimism that interfaces between these systems would solve data exchange needs. Most organizations have since realized that while selecting the best application on the marketplace had clear advantages for functionality, the approach created complexity for users and for technical, support, and contracting staff. As we will see in the next chapter, interfaces have clear functional and operational drawbacks and significant costs. As “best of breed” has fallen from favor, there has been a resurgence of interest in single-vendor application suites, and in a compromise middle ground in which most applications come from an integrated collection of core systems, with sparing use of specialized departmental systems.

Another caveat to the distinction between integrated and interfaced architectures is that vendors sometimes achieve “marketing integration,” meaning they have internally combined systems they acquire (by swallowing best-of-breed solutions) and present them as a single, integrated product when in fact these systems are not as fully integrated as they may seem.

In our opinion, good architecture starts with an integrated solution and the organization chooses a non-integrated one only if business demands can only be met with a non-integrated approach.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B978012420217700002X

Creating and Supporting Interfaces

Thomas H. Payne, Kent A. Beckton, in Practical Guide to Clinical Computing Systems (Second Edition), 2015

6 Final thoughts regarding interfaces

Complex and diverse workflow in healthcare delivery creates pressure for computing systems to be developed or tailored to the needs of specialties. The needs of an orthopedic and a cardiology practice are different; it is not surprising, therefore, that the two groups see advantages in having a computing system tailored for them. This can result in many different clinical computing systems within the organization, each with its own login and password, list of authorized users, user interface, and—most importantly—its own specialty data about a patient’s health. One solution is to make the system difference transparent to clinicians, either by exchanging data through an interface or by creating a view in one system that contains data from the departmental system. However, these tasks take time and resources, and the growth in the number of specialized systems may exceed the organization’s ability to create new interfaces and views. The result is that introducing the new system may create a simpler workflow and contribute valuable data for the specialist, but the general clinical user will face more complexity: one more place to remember to access, or to ask the department to send data from. Under the pressure of a busy practice, dedicating time to search for data in myriad locations is often deemed less important than other tasks, and important data are missed. We know that clinicians are accustomed to making decisions with incomplete data.

Vendors who supply clinical computing systems to healthcare organizations are generally paid in two ways: licensing fees and maintenance contracts, both applied to software their firm creates and supports. Integrating systems from different vendors so that clinicians can find information easily is almost always the responsibility of the organization itself. Vendors point out that the need for interfaces is reduced if more applications are licensed from them rather than purchased from different vendors, and that if an interface is needed, they have created HL7 interfaces with many other vendors. The cost of creating HL7 interfaces is considerable (estimated at $50,000 per interface, and higher in UW experience), and an interface typically requires a year or more from plan to production use. So the vendor promise that HL7 interfaces will solve the problem of dispersion of clinical information is expensive, time-consuming, and often unfulfilled. The majority of the burden falls on the organization and not on the vendor.
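To make the interface discussion concrete, the following is a minimal sketch, not taken from this chapter, of the kind of pipe-delimited HL7 v2 payload such interfaces typically carry between a departmental system and a hospital-wide record, along with a toy parser. The system names and patient data in the sample message are invented for illustration; a production interface engine would do far more (acknowledgments, error handling, mapping tables).

```python
# Minimal sketch of parsing an HL7 v2 admit (ADT^A01) message.
# The message content below is invented sample data, not real patient data.
SAMPLE_ADT = (
    "MSH|^~\\&|ORTHO_EMR|CLINIC|HOSP_EHR|HOSPITAL|202401151200||ADT^A01|MSG0001|P|2.3\r"
    "PID|1||123456^^^HOSP^MR||DOE^JANE||19700101|F\r"
    "PV1|1|I|ORTHO^201^A\r"
)

def parse_hl7(message: str) -> dict:
    """Split an HL7 v2 message into segments keyed by segment ID (MSH, PID, PV1, ...)."""
    segments = {}
    for line in filter(None, message.split("\r")):
        fields = line.split("|")
        segments.setdefault(fields[0], []).append(fields)
    return segments

if __name__ == "__main__":
    msg = parse_hl7(SAMPLE_ADT)
    pid = msg["PID"][0]
    # Field positions follow HL7 v2 conventions: PID-5 is patient name, PID-7 is birth date.
    print("Patient:", pid[5], "DOB:", pid[7])
    print("Message type:", msg["MSH"][0][8])  # MSH-9, e.g. ADT^A01 (admit notification)
```

Even this small example hints at why each interface costs real effort: every sending and receiving system must agree on segment usage, field meanings, and code sets before messages like this can flow reliably.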

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780124202177000031

Technical overview

Abby Clobridge, in Building a Digital Repository Program with Limited Resources, 2010

Administrative collections

Administrative departments, specifically athletics and public relations, generate vast numbers of digital images (and, increasingly, video). In some respects, these media files are used in slightly different ways from digital objects in most repository collections. Specifically, media is heavily used for a short period early in its lifecycle, but then quickly gets filed away and is rarely if ever used again.

In general, these departments need to maintain a high level of control over their objects. Their work is focused on short-term processing and immediate use of the objects rather than long-term curation and preservation. Even though one of the purposes of the repository program is to identify, collect, and curate objects that are unique to the institution, these departments are often overlooked. If the repository team does not intervene, these images are usually lost altogether. They end up on discarded computer hard drives, disorganized external hard drives, or CDs in drawers. In the pre-digital era, boxes of photographs would eventually make their way to the university’s archives. The equivalent process in the digital era is often overlooked and can easily overwhelm an archivist. While there might have been a few hundred photographs taken at a particular event in the past, that number likely skyrocketed when photographers switched to digital cameras. Now, the same event could easily generate a few thousand digital images, possibly from several photographers.

Managing digital images and video leads to the crux of the issue. Public relations and athletics departments are often extremely eager to have help managing the large quantity of objects they own. They need tools that allow them to easily manage their existing workflows, which are focused on supporting objects immediately after they are created – not necessarily systems designed to support the long-term lifecycle of digital objects.

Traditional digital asset management systems designed for the corporate sector (i.e., those focused more on capturing and providing internal access to digital objects as they are created) are better suited to this type of workflow than a repository system. Systems such as Canto Cumulus serve this niche. Systems that fit this need – designed to support internal, departmental workflows rather than the dissemination of objects – are more appropriately financed by those departments.

If, however, the library is to play a key role in supporting such a system, it would be more appropriate to look for an alternative, either working in partnership with the IT department to create a homegrown database, using a network-accessible version of Picasa, or setting up a workflow productivity tool such as Adobe Lightroom in a way that can support multiple users.

Even if one of the repository systems is not selected as the primary home for these departments’ working digital objects, the repository team should still work closely with them, particularly with regard to their metadata. If the metadata and its schema are structured in a useful way, the repository team or the IT department should be able to set up an automated feed from the departmental system directly into the repository. In this example, an athletics department’s database includes the following fields:

sport (basketball, volleyball, football, swimming, diving, etc.);

level (varsity, junior varsity (JV), club, intramural);

gender (men’s, women’s, co-ed);

type of shot (close-up, official group, action, etc.);

photographer;

photo date;

rights;

status (send to repository, under consideration, N/A);

names (name of individuals clearly visible in the image);

description (free-text field for descriptive notes about the photograph);

subjects;

usage (website, calendar, alumni e-mail, etc.);

usage date.

The repository team (and in particular, a metadata librarian) can be of assistance in a number of ways:

Ensure that the data schema is constructed in a meaningful way, one that will lend itself to the full range of the department’s needs. Since metadata librarians and repository staff are familiar with a wide range of collections, they are likely to have suggestions that will be helpful to the department.

Work with the department to create a data dictionary that accurately reflects the fields and their formats, lists controlled vocabularies, and provides examples.

Work with the department on a regular basis to review data in subject fields to ensure consistency.

Map fields from the department’s production database to the repository. Not all fields should automatically be imported into the repository. For instance, usage information, the cost of a particular photo, or the CD a photo was originally stored on probably should not be transferred into the repository.

Set up an automated feed so that records with ‘send to repository’ selected in the status field are routinely copied out of the departmental production database and deposited into the repository system.

Even if the departmental databases are housed in different systems, the repository team should be involved in setting up the database and working to bring selected objects into the repository system. Getting the repository team out and involved in departmental work will further the visibility of the library and this workgroup.
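As a concrete sketch of the automated feed described above, the short script below selects records flagged ‘send to repository’ from a hypothetical departmental database, maps a subset of the fields into repository metadata, and deliberately leaves behind fields such as usage. The table name, column names, database path, and field mapping are invented for illustration; a real feed would use the department’s actual schema and the repository’s deposit API.

```python
import sqlite3

# Map departmental columns to repository metadata fields (names are illustrative).
# Columns such as usage or usage_date are intentionally absent from the map.
FIELD_MAP = {
    "sport": "subject_sport",
    "level": "subject_level",
    "photographer": "creator",
    "photo_date": "date_created",
    "rights": "rights",
    "names": "people_depicted",
    "description": "description",
    "subjects": "subjects",
}

def fetch_records_to_deposit(db_path: str):
    """Return departmental rows whose status field is set to 'send to repository'."""
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row
    rows = conn.execute(
        "SELECT * FROM athletics_photos WHERE status = ?", ("send to repository",)
    ).fetchall()
    conn.close()
    return rows

def to_repository_record(row) -> dict:
    """Translate one departmental row into the repository's metadata schema."""
    return {repo_field: row[dept_field] for dept_field, repo_field in FIELD_MAP.items()}

if __name__ == "__main__":
    for row in fetch_records_to_deposit("athletics.db"):  # placeholder database file
        record = to_repository_record(row)
        # In a real feed this would call the repository's deposit interface.
        print("Would deposit:", record)
```

The value of the data dictionary and controlled vocabularies discussed above shows up here: the mapping only works if both sides agree on what each field contains.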

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9781843345961500031

Working with Organizational Leadership

Charles Gutteridge, in Practical Guide to Clinical Computing Systems (Second Edition), 2015

4.1 Single System

The most effective healthcare delivery systems cut across what should now be seen as outdated professional and administrative boundaries. The division of care into general practice, primary care, secondary care, complex care, and social care may suit the health professional and systems designed for income flows, but it is counterintuitive for the delivery of effective care to people seeking a single pipeline of support for maintaining health. Health information technology is now in a position to connect care at all points of delivery and to create a real or virtual single system from the perspective of the citizen and patient. An important technical challenge in developing a cutting-edge health information technology system is collecting structured data at the point of care. The next overarching issue, after data collection, is finding ways to distribute those data with purpose across all parts of the system. Interoperable software systems still elude healthcare, but one way to overcome the challenges of interoperability is to implement an enterprise-wide health information technology system. This often engenders fierce conflict with owners of previous systems and poses considerable technical challenges for data migration. At the hospital and healthcare system level, adoption of single-system technologies has on the whole been slow, in part because the organizational benefits of enterprise-wide technologies take time to realize and are often in competition with departmental or research systems, which provide immediate local benefits to clinical users who may not be driven by whole-system benefits. Resolving these conflicts and developing a story of health change is another core skill needed by the modern health information technology leader.

It may be instructive to explore why adoption of point-of-care clinical data recording is generally much more advanced in some primary care environments. In Europe, most general practitioners deliver care through practice-level health information technology systems. In the UK, for example, use of such systems by primary care physicians is close to 100%. Historically, the main drivers of use were local ownership and development of such systems by the physicians themselves. More recently, care teams have embraced these systems to automate primary care pathways in order to improve practice profitability and work patterns, as well as to report on activity and so benefit from nationally developed quality incentives. Similar patterns of health information technology use are widely seen in other European countries. In the United States, which previously lagged behind in primary care take-up of health information technology, the implementation framework created by the meaningful use legislation has now produced rapid uptake of health information technology systems. It is also worth noting that some of the most interesting innovations in primary care delivery have been made in the developing world using cutting-edge software offerings, often in the form of personally held “apps” delivered through mobile phone technologies. While these administrative and technical innovations may not always find a niche in other healthcare systems, a set of generally applicable development rules can be extracted from the implementation of such systems. (See Box 12.5.)

Box 12.5

Development rules for constraining strategic thinking about single systems

Such systems should:

1.

Provide patient value and offer services that harness established behaviors

2.

Attempt to use existing technologies to reinvent and adapt current delivery

3.

Find ways to borrow assets from different parts of the delivery system

4.

Allow development of new revenue streams by innovative use of data and information

5.

Support the standardized training of healthcare staff

6.

Support a single view of information and virtual integration of services for the user

Globally, health information technology leaders have had to use considerable skill in developing arguments to persuade organizational leaders to adopt enterprise-wide single health information technology systems. While the benefits should be self-evident, there is a relative lack of published benefits realization studies demonstrating how the very sizeable investments both in human and financial terms are amortized over the life cycle of the system. At times of financial pressure, the arguments for new expenditure for service redesign and transformation have to be well marshaled by the new breed of health information technology leader. There is a global need for evidence to support risk-taking in driving innovation and a call for well-structured case reports and publications to help boards make investment decisions in technology. At present the investment is made at government level, which commonly produces distortions in market development of products and frequently slows innovation. There are complex reasons why benefits realization is slow (Box 12.6).

Box 12.6

Why benefits are only slowly realized after implementation of health information technology systems

1.

Slow adoption of solutions at a clinical level

2.

Complex contracting arrangements for large-scale projects

3.

Requirement for long life cycles in implementing clinical change

4.

Political interference in the application of national scale solutions

5.

Developing efficient clinical algorithms requires high quality clinical engagement and leadership

6.

The change barrier for care professions to use health information technology at the bedside is significant

7.

The lack of interoperability solutions across health communities has slowed innovation

8.

Adaptive change at the level of the enterprise-wide system vendors can be slow

There are, however, numerous examples globally where excellent implementation and adoption of single systems across the health community have resulted in both health gain for a population and a return on investment for the health system deploying them. The challenges for health information technology and organizational leaders in overcoming local, national, and political barriers to health system transformation using technology are considerable. The mindset required includes a clearly expressed vision of future possibilities supported by persistence, focus, and courage. A key organizational skill is developing a working partnership with vendors based on innovation, flexibility, trust, and confidence. Building this relationship is one of the key objectives of senior health information technology staff, while translating the product into meaningful clinical use is a daily task for front-line leaders. Increasingly, as leaders of health systems seek ways of building lifetime value and choice for citizens and patients based on prevention, self-care, and personal autonomy, a growing focus on interoperability between systems is developing.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780124202177000122

Stalking the Competition: How ISA 2004 Stacks Up

Dr. Thomas W. Shinder, Debra Littlejohn Shinder, in Dr. Tom Shinder's Configuring ISA Server 2004, 2005

Comparing ISA 2004 to WatchGuard

According to information provided by International Data Corp. and published by CNET News at //news.com.com/2100-7355-5079045.html, WatchGuard was ranked fifth (after Cisco, NetScreen, Nokia, and SonicWall) among security appliance vendors in 2003, with a 4 percent market share.

In this section, we provide an overview of WatchGuard appliances. We look at WatchGuard's general specifications, platform support and system requirements, application layer filtering capabilities, VPN support and Web caching abilities, and examine how ISA Server 2004 stacks up against them.

Watchguard: General Specifications

Watchguard is offering the following appliance models at the time of this writing:

SOHO 6: designed for small businesses and remote offices; provides stateful packet filtering and VPN capability

Firebox X: designed for small to mid-sized enterprises; scalable to grow with the business

Firebox Vclass: designed for medium-sized enterprises; supports high-speed networking and advanced networking features

A comparison of the features among the various WatchGuard appliance models is shown in Table 3.4.

Table 3.4. WatchGuard Model-by-Model Feature Comparison

Feature | Firebox X | SOHO 6 | Firebox Vclass
Firewall throughput | Up to 275 Mbps | Up to 75 Mbps | Up to 2 Gbps
VPN throughput | Up to 100 Mbps | Up to 20 Mbps | Up to 1.1 Gbps
Concurrent sessions | 500,000 | 7000 | 500,000
Interfaces | 6 10/100 (3 active) | 6 10/100 | V200, V100: 2 1000BaseSX Fiber Gigabit Ethernet + 2 Dedicated HA; V80, V60, V60L: 4 10/100 + 2 Dedicated HA; V10: 2 10/100
VPN tunnels | Up to 1000 | Up to 10 | Up to 40,000
ALF | HTTP, SMTP, FTP, DNS, H.323, DCE-RPC, RTSP | HTTP | SMTP, HTTP
Spam filtering | Optional addition | No | No
URL filtering | Optional | Optional | No
High availability | Active/passive | No | Active/passive; active/active (optional)
QoS | No | No | Yes
VLAN tagging | No | No | Yes
Mobile user VPN licenses | Up to 1000 | Up to 10 (optional) | Up to 20
Network diagnostic tools | No | No | Yes
Command line interface | No | No | Yes
Real-time monitoring | Yes | No | Yes
Historical reporting | Yes | No | No
Upgradability | To be available March 2004 | Upgrade from 10 to 25 or 50 users | V60L upgrade to V60

At the time of this writing, typical pricing for various WatchGuard Firebox models is shown in the following list:

SOHO 6 / 10 users: $549
SOHO 6 / 50 users: $899
Firebox III 700/ 250 users: $2490
Firebox III 2500/ 5000 users: $5790
Firebox V10 / unlimited (20/75Mbps): $799
Firebox V60 / unlimited (100/200Mbps): $599
Firebox V80 / unlimited (150/200Mbps): $8490
Firebox V100 / unlimited (300/600Mbps): $14,490

Additional user licenses may be required for SOHO and Firebox V10 (10 users supported out of box). VPN Manager software is required for more than one VPN site with SOHO models:

Four Fireboxes: $796
20 Fireboxes: $2796
Unlimited Fireboxes: $6396

VPN client software cost:

5 user: $220
50 user: $1800

Vclass MU VPN client software cost:

100 user: $780
1000 user: $1440

Centralized Policy Manager (CPM) is used for multiple Vclass appliances. The cost of the CPM for Windows NT/2000 is as follows:

10 appliances: $2840
100 appliances: $12,680

(Watchguard pricing information was gathered from //www.securehq.com/group.wml&storeid=1&deptid=76&groupid=222&sessionid=200437249417233)

WatchGuard: Platform Support and System Requirements

The Watchguard appliances run a proprietary operating system and firewall software (Security Management System) that can be configured in three ways:

InternetGuard: protects corporate networks and bastion hosts and defines corporate-level security.

GroupGuard: protects departmental systems, restricts flow of information and packets, and defines Internet privileges at the group level.

HostGuard: protects specific servers.

How does ISA Server 2004 compare? ISA Server 2004 runs on standard Intel PCs that are easily upgraded and can be installed on Windows 2000 Server or Windows Server 2003, providing a standardized, familiar management interface and the flexibility to use hardware of your choice. This makes ISA Server more scalable than ASIC appliances that are tied to the hardware and more user-friendly than appliance-based firewalls.

The Windows Server 2003 OS can be “hardened” by applying a series of special profiles included in Server 2003 SP2 for the Security Configuration wizard. Microsoft also provides a system hardening guide that includes specific configuration recommendations and deployment strategies for ISA Server 2004. The document can be downloaded at //www.microsoft.com/technet/prodtechnol/isa/2004/plan/securityhardeningguide.mspx.

WatchGuard: Application Layer Filtering Capabilities

Watchguard Fireboxes (except the lower cost models – SOHO and V10) support application proxies to block common application-layer attacks. You can set protocol rules for HTTP, FTP and SMTP. Firebox III models 500, 700, 1000, 2500, and 4500, and Firebox Vclass models V60L, V60, V80, V100, and V200 support the following proxies:

SMTP: inspects content of incoming and outgoing e-mail; denies executable attachments, filters by address, filters malformed headers, spoofed domain names, and message IDs, specifies maximum number of message recipients and maximum message size, allows specific characters in e-mail addresses.

HTTP: blocks Web traffic on ports other than 80, filters MIME content, Java, and ActiveX, removes unknown headers, removes cookies, filters content to comply with use policies.

FTP: Filters FTP server commands, uses read-only rules to control file changes, sets time limits for idle connections.

DNS: Checks for malformed headers and packets, filters header content for class, type, and length abnormalities.

H.323: Limits open ports.

The Vclass firewalls provide built-in intrusion detection, with configurable logs and alarms for the following attacks:

Java script blocking

IP source route

Denial of service (DoS)

Distributed denial of service (DDoS)

Ping of Death

ICMP flood

TCP SYN flood

UDP flood

Automatic logs are embedded in the ASIC to detect the following attacks:

LAND

Teardrop

NewTear

OpenTear

Overdrop

Jolt2

SSPING

Bonk/Boink

Smurf

Twinge

How does ISA Server 2004 compare? ISA Server 2004's intrusion detection mechanism can detect the following types of attacks:

Windows out-of-band (WinNuke)

Land

Ping of Death

IP half scan

UDP bomb

Port scan

DNS host name overflow

DNS length overflow

DNS zone transfer

POP3 buffer overflow

SMTP buffer overflow

ISA Server includes deep application layer filtering at no extra cost. ISA Server 2004 performs intelligent stateful inspection using “smart” application filters. Not only can you determine the validity of data moving through the firewall in request and response headers, you can also filter by “signature” (text string) for keyword filtering or filter for particular file types. ISA 2004 supports Websense and other third-party filtering products and services.

ISA Server 2004 inspects all aspects of HTTP communications. The SMTP filter protects against invalid SMTP commands that cause buffer overflows, and the SMTP message screener blocks spam and mail containing dangerous attachments.

ISA Server's RPC filtering protects against exploits and malicious code directed to the RPC services and ensures that only valid connections get through to the Exchange server.

ISA Server's DNS filtering prevents application layer attacks aimed at published DNS servers, and the POP3 filters protect published POP3 mail servers from attack.

WatchGuard: VPN Support

The number of VPN tunnels and the VPN throughput of WatchGuard Fireboxes vary widely depending on the model. The lower cost appliances (SOHO, Firebox III 700, Firebox V10) support few or no VPN clients. VPN support for various models is shown in Table 3.5.

Table 3.5. WatchGuard Model-by-Model VPN Support Comparison

Model | VPN throughput | Max VPN clients | Free VPN clients included | VPN sites
SOHO 6 | 20 Mbps | 5 | 0 | 1/5
Firebox III 700 | 5 Mbps | 150 | 0 | 1000
Firebox III 2500 | 75 Mbps | 1000 | 50 | 1000
Firebox V10 | 20 Mbps | 0 | 0 | 10
Firebox V60 | 100 Mbps | 400* | 20 | 400*
Firebox V80 | 150 Mbps | 8000* | 20 | 8000*
Firebox V100 | 300 Mbps | 20,000* | 20 | 20,000*

*Total client plus site connections

Firebox V80, WatchGuard's enterprise level firewall, supports the following VPN protocols:

IPSec with IKE

L2TP over IPSec for external L2TP servers

PPTP over IPSec for external PPTP servers

IPSec Security Services

Tunnel and Transport Mode

ESP (Encapsulating Security Payload)

AH (Authentication Header)

AH + ESP

IPSec Encryption and Authentication

DES and 3DES

MD5 and SHA-1

RSA

Digital Signature Standard (DSS)

Certificate Management

Automatic Certificate Revocation List (CRL) through LDAP Server

Digital Certificates X.509 v2 and v3, PKCS #10, and PKCS #7

Watchguard Fireboxes require a proprietary Mobile User VPN client, which must be distributed, along with security configuration policy, to each client machine. The VPN client includes personal firewall software for the client computer.

How does ISA Server 2004 compare? ISA Server 2004's VPN wizards make it easy to set up VPNs. ISA Server supports the use of the Connection Manager Administration Kit (CMAK) to create VPN connectoids that allow users to connect to the VPN server with one click, and supports an automatically downloadable phone book. CMAK also allows you to customize routes for VPN clients. CMAK wizards make it easier for the administrator as well as the user.

ISA Server uses the IETF standard L2TP/IPSec NAT Traversal (NAT-T) protocol to connect to Windows Server 2003 VPNs. ISA Server 2004 supports DES, 3DES, and AES encryption.

ISA Server 2004 supports both remote access and site-to-site VPNs. ISA Server can apply firewall policy to the VPN interfaces.

ISA Server 2004 supports both Microsoft PPTP and L2TP clients. ISA Server does not require any software to be added to VPN clients. ISA Server supports the PPTP and L2TP/IPSec VPN clients that are built into Windows 9x/ME, Windows XP, Windows NT, 2000, and Server 2003 operating systems.

ISA Server's VPN quarantine allows administrators to enforce specific conditions that VPN clients must meet before being allowed to connect (for example, the latest service pack/updates must be installed, and antivirus and personal firewall software must be installed and operational) and to direct clients to a server to download and install the required updates. This goes further than WatchGuard's Mobile User VPN client, which enforces use and update of firewall software.

WatchGuard: Web Caching

Watchguard appliances do not include Web caching functionality. Web caching/acceleration can be added to a network using Watchguard products by implementing a caching solution such as ISA Server on the network.

How does ISA Server 2004 compare? ISA Server 2004 includes Web caching functionality at no extra charge. Forward caching allows the ISA Server 2004 firewall to cache objects retrieved by internal users from external Web servers. Reverse caching allows the ISA Server 2004 firewall to cache objects retrieved by remote users from servers that have been published by the ISA Server 2004 firewall. Web objects requested by remote users are cached on the ISA Server 2004 firewall, and subsequent requests for the same objects are served from the firewall's Web cache instead of forwarding the request to the published Web server located behind the ISA Server 2004 firewall.

Fast RAM caching allows the ISA Server 2004 firewall to keep most frequently accessed items in memory. This optimizes response time by retrieving items from memory rather than from disk. ISA Server 2004 gives you an optimized disk cache store that minimizes disk access for both read and write operations. ISA Server 2004 also supports Web proxy chaining, which allows the ISA Server 2004 firewall to forward Web requests to an upstream Web proxy server.
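The forward-caching behavior described above is not specific to ISA Server. The toy cache below is a generic sketch of the underlying idea, not ISA code and not its API: repeat requests are answered from a local store, and the origin Web server is contacted only on a cache miss. Reverse caching applies the same idea in front of published servers. The example URL and TTL are arbitrary.

```python
import time
import urllib.request

class ForwardCache:
    """Toy illustration of forward caching: repeat requests for the same URL
    are served from the local store instead of going back to the origin server."""

    def __init__(self, ttl_seconds: int = 300):
        self.ttl = ttl_seconds
        self.store = {}  # url -> (fetched_at, body)

    def get(self, url: str) -> bytes:
        entry = self.store.get(url)
        if entry and time.time() - entry[0] < self.ttl:
            return entry[1]  # cache hit: no request leaves the network
        with urllib.request.urlopen(url) as resp:  # cache miss: fetch from origin
            body = resp.read()
        self.store[url] = (time.time(), body)
        return body

if __name__ == "__main__":
    cache = ForwardCache()
    cache.get("http://example.com/")  # fetched from the origin server
    cache.get("http://example.com/")  # served from the cache
```

A production cache adds what the text describes for ISA Server: memory plus disk storage tiers, cache-control header handling, and chaining to upstream proxies.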

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9781931836197500101

What Does an Application Administrator Do?

Kelly C. Bourne, in Application Administrators Handbook, 2014

1.1 Overview of the Position

Application Administrators aren’t developers and they’re not users, but they are critical to keeping the applications your organization relies on running. They install, update, tune, diagnose, and babysit both internal and third-party applications. The applications they support can include ERP (Enterprise Resource Planning), CRM (Customer Relationship Management), POS (Point of Sale), BPM (Business Process Management), budgeting and forecasting, HR (Human Resources), legal matter management, AP (Accounts Payable)/AR (Accounts Receivable), payroll, general ledger, SOX (Sarbanes-Oxley) compliance tracking, training, time tracking, supply chain, database engines, and messaging, i.e., e-mail.

While software can be readily licensed from a vendor, it still requires a significant amount of effort on the part of the acquiring organization. Someone has to prepare the servers that it will run on. Then someone has to install it, configure it, load data into it, tune it, upgrade it, and generally keep the package up and running. If errors occur, someone has to report them to the vendor and work with vendor technicians to correct the problems. These are all tasks that an Application Administrator handles.

In many cases, corporations are absolutely dependent on these applications being kept running. What would be the response of employees if the payroll application broke down? What would happen to the organization’s financial situation if invoices weren’t sent out to customers? What if new employees couldn’t be added to the HR system? The importance of Application Administrators and their level of expertise shouldn’t be overlooked. Since the trend of relying upon third-party software isn’t going to decrease in the foreseeable future, the role of Application Administrator won’t be going away either.

Every company employs them even if their official job title doesn’t sound at all like “Application Administrator.” A job title of “system application administrator” might be for a position that covers both application administration and systems administration. Since there is a significant degree of overlap between these two positions, this isn’t uncommon.

Any software the organization relies upon is almost certain to have an Application Administrator supporting it. This includes software acquired from a third-party vendor or from an internal development team. Development teams typically develop the application and then hand support responsibilities off to another group within the organization. For better or for worse, they don’t tend to stick around indefinitely to provide ongoing production support.

1.1.1 Application administrator backgrounds

The background of IT professionals working as Application Administrators varies widely. Some have a background in software development. Others became Application Administrators because an administrator was needed and they were in the right place at the right time. Individuals without formal education or training in IT will benefit the most from this book. It provides hands-on advice on how to administer applications, troubleshoot them, and establish best practices for keeping applications running smoothly. But even the most experienced Application Administrator has weak areas that this book can help shore up.

1.1.2 Potential skillset

The list of potential skills that an Application Administrator might be required to have can be long and diverse. The skills that are being sought range from very specific technical skills to skills that are considered “softer.” Virtually every posting requires some variation of excellent communication skills, troubleshooting ability, problem solving and/or analytical skills, flexibility, and understanding business needs. Some examples of requested skills are:

Expertise and experience in XYZ application is a must.

Strong experience on failover, high availability, disaster recovery, business continuance.

Strong experience in XYZ version control tool.

Good knowledge and demonstrated troubleshooting abilities on connectivity issues due to firewall, load balancer, proxy, and others.

Experience with SOX compliance and methodologies.

Hands on experience in process automation, best practice approach, technology efficiency, and effectiveness.

Knowledge of Web Services and Services Oriented Architecture is desirable.

Requires extensive knowledge of Windows 2000/2003 Server.

Should be experienced with SQL Query Development as it relates to XYZ databases.

Must demonstrate strong experience in designing, implementing, and maintaining current Windows server products including Microsoft SQL 2005, IIS, Windows Clustering, Network Load Balance, Net Environments, and ISA.

Strong Linux experience including shell and Perl scripting for administration tasks.

Experience with monitoring tools is a plus.

Knowledge of Oracle Application Server, Apache Tomcat, and Microsoft IIS a plus.

Excels at the highest technical level of all phases of applications systems analysis and programming activities.

Understands software and hardware requirements of varied departmental systems.

Understands the workflow and process requirements of complex application systems.

Demonstrated ability to be the subject matter expert in supporting, maintaining, and administering complex applications.

Excellent problem solving/analytical skills and knowledge of analytical tools.

Display and execute logical and complex troubleshooting methods.

Excellent verbal, written communication, and negotiations skills.

Demonstrated soft skills, such as presenting ideas and clearly articulating concepts to senior management.

Ability to effectively interface with technical and nontechnical staff at all organizational levels.

Strong customer services and problem solving skills.

Ability to provide outstanding customer service, be a good listener and work well with others.

Self-motivated, able to work independently, and takes initiative.

Ability to multitask in a fast-paced environment.

Outstanding attention to detail with superior time and project management skills.

Demonstrated ability to work successfully with a diverse group of customers.

Ability to learn new content areas and new skills quickly and well required.

Professional attitude and work habits.

Understands business function related to the application.

Ability to work through ambiguous work situations.

1.1.3 Duties and responsibilities

The list of duties and responsibilities described in some job postings is as broad and diverse as the technical skills that are required of prospective job applicants. It wouldn’t be realistic to expect a single candidate to be responsible for this entire list of duties, but don’t be surprised if your initial job description gets widened to include more and more responsibilities as time goes by. Some of the duties and responsibilities that an Application Administrator might be given include:

The candidate shall monitor the XYZ software application, document and analyze problems, and publish maintenance schedule

Sets up administrator and service accounts

Maintains system documentation

Interacts with users and evaluates vendor products

May program in an administrative language

Provides advice and training to end-users

Maintains current knowledge of relevant technologies as assigned

The candidate shall serve as part of a team responsible to maintain an XYZ system availability rate of 99%

Troubleshoot, and resolve any reported problems

Provide application performance tuning

The candidate shall review the governing regulations to ensure proper program support

The candidate shall monitor, update, and maintain existing legacy environment software systems interfaces to ensure that the interfaces exchange data properly and to support the current legacy environment

This is a hands on senior technical position with Subject Matter Expertise (SME) on XYZ app

Enable best practices

Process automation

Maintain SLA, System Availability, Capacity management, and Performance KPI

Collaborate with hardware, OS, DBA technical teams to ensure proper integration of the environment

Work closely with application development teams and vendors to tune and troubleshoot applications

Plan and coordinate testing changes, upgrades, and new services, ensuring systems will operate correctly in current and future environments

Provides second level of technical support for all corporate systems and software components

Provide Level 3 support for the application. Must be able to support 24 × 7. Also enable production support team to tackle Level 2 support and issues

Leads and participates in efforts to develop and implement processes for application and system monitoring

Leads and participates in efforts to implement application updates to include upgrades, patches, and new releases

Tests, debugs, implements, and documents programs. Assists in the modification of company products and/or customer/internal systems to meet the needs of the client and/or end-user

Develops test plans to verify logic of new or modified programs

Develop and maintain the reporting and dashboard infrastructure for the organization

Develop work plans and track/report status of assigned projects/tasks

Liaise with vendor support on all issues

Fully responsible for problem management activities such as issue resolution and root cause analysis

Daily monitoring and maintenance activities

Assist in the day-to-day operations of Operations department

Reviews and addresses assigned technical support tickets and calls, enters all updates related to such calls into the Help Desk ticketing system, and keeps team aware of any sensitive or escalating issues

Provides subject matter expertise for all applications

Participate in security and application audits

Occasionally supporting off-hours activities. This position may require a flexible schedule

Promote changes through the use of XYZ adhering to SOX policies and procedures

Identify, download and apply XYZ upgrades and patches

Research issues with application middleware, database, etc., and recommend/apply solutions such as configuration changes to O/S, WebLogic, Tuxedo, Java, etc., additional hardware, memory, CPUs, etc.

Identify problematic SQL and work with developers, analysts, and DBAs to resolve

Optimize and tune the XYZ application components

Work with customers and analysts to develop scripts used to perform load testing

Use load testing tool to perform tests to determine application load capabilities

As the above list makes painfully obvious, the demands put upon an Application Administrator are diverse and plentiful. It’s an interesting job. It’s a challenging job. It’s certainly not a boring job. Every day will bring new challenges. Every problem is a learning opportunity. Every solution is an opportunity to educate your users, other professionals in the organization, or your successors.
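Many of the routine duties listed above (monitoring, availability reporting, reviewing issues) come down to small scripts run on a schedule. The sketch below is a generic example, not tied to any particular application or to the book: it probes an application's health URL and appends the result to a log, the kind of check an Application Administrator might schedule to back an availability SLA. The URL and log path are placeholders.

```python
import datetime
import urllib.request

APP_HEALTH_URL = "http://appserver.example.com:8080/health"  # placeholder URL
LOG_PATH = "availability.log"  # placeholder log file

def check_application() -> bool:
    """Return True if the application's health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(APP_HEALTH_URL, timeout=10) as resp:
            return resp.status == 200
    except Exception:
        return False  # network errors, timeouts, and HTTP errors all count as down

def record(status: bool) -> None:
    """Append a timestamped up/down record; these entries feed availability reports."""
    line = f"{datetime.datetime.now().isoformat()} {'UP' if status else 'DOWN'}\n"
    with open(LOG_PATH, "a") as log:
        log.write(line)

if __name__ == "__main__":
    record(check_application())  # typically run every few minutes from a scheduler
```

In practice a check like this would also raise an alert on failure and feed whatever monitoring tool the organization already uses.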

1.1.4 Types of applications that need an administrator

Applications that are licensed from a third-party vendor and weren’t custom built for an organization are frequently referred to by the acronym COTS—Commercial Off The Shelf Software. Because there are so many installations of COTS applications, they are primarily what Application Administrators support.

In addition to COTS packages, Application Administrators also work to administer Software as a Service (SaaS) applications. SaaS applications are hosted by the vendor. The client’s users access it via a web browser directed to a specific URL. If the SaaS application is critical to your organization, then someone will need to function as an administrator to help users, work with the vendor when problems occur and act as an intermediary between your organization and the vendor’s technical staff.

More and more enterprise-level software used worldwide is licensed from third-party vendors instead of being developed internally. This trend isn’t likely to change in the future. If anything, it’s likely to accelerate. Reasons for this are numerous and include the following:

Developing a complex application is both extremely difficult and very expensive. It takes time, skilled individuals, and significant resources to develop effective, reliable software. Most organizations lack the experience to do it properly. The failure rate of large-scale software development projects is appallingly high.

Organizations like to focus on their primary business function. For most businesses, software development isn’t their core function. Developing an ERP application or any other complex application reduces their ability to focus on running the business.

Enterprise-level software is complex and becoming more so. Many, if not most, organizations extend across state or international boundaries. This requires that the software be capable of handling the laws and regulations of every state and country that it operates in. Laws and regulations tend to be extremely dynamic. It’s very time consuming to modify and test software to properly handle this flood of changes.

Software applications have to deal with dynamic environments. New versions of operating systems become available and existing ones are retired. Modifying and testing an application to deal with a new operating system is a significant commitment. New web browsers and updated versions of existing ones are released on a regular basis. Applications that rely on a web browser need to be tested, and possibly modified, for each new browser upgrade. Database systems undergo regular upgrades and modifications. Applications that rely upon a database system need to be tested and possibly modified to deal with changes that occur within the database package.

Security vulnerabilities are a constant threat for all software, especially ones that are widely deployed and deal with confidential or personally identifiable information (PII). The dangers to an organization’s reputation and the costs of a breach are shockingly high. Staying knowledgeable about newly discovered security threats requires the focus of skilled professionals.

Due to the above points, more and more organizations are choosing to “outsource” development of software applications to specialized vendors. Acquiring a third-party application is definitely a compromise situation. None of the existing packages is likely to provide the exact features that an organization wants or needs. On the other hand, the cost and time to install an off-the-shelf application are significantly less than what it would take to develop the application internally or have it custom built by a third party.

The primary exceptions to the trend of licensing applications are when the application is “core” to the business and provides a competitive advantage. For example, Google is never going to license software from a vendor to replace its Page Ranking algorithms. Those algorithms are the heart and soul of Google and will always be kept in-house. Contrast this with an organization’s payroll application. There is nothing unique or advantageous about payroll. Certainly, it’s an important process, but it doesn’t rise to the level of being a trade secret. It wouldn’t make sense for an organization to spend millions of dollars to develop proprietary algorithms to cut paychecks. There wouldn’t be any significant payback from such an investment.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780123985453000017

Server Classifications

Shu Zhang, Ming Wang, in Encyclopedia of Information Systems, 2003

I. Servers

A server is a networked computer that serves requests from multiple computers. Servers play very important roles in information systems, especially in distributed computing environments and client/server architectures. A common example would be a database server serving many users over a network simultaneously. In this case, users might use a desktop PC (personal computer) with a graphical display to compose and send a request, and receive and display the result of the request from the server. Since a server might serve hundreds or even thousands of users at the same time, it needs a more powerful CPU (central processing unit), possibly multiple CPUs, and faster and redundant data storage devices, like a RAID (redundant array of inexpensive disks) device. Because a server might store shared critical data, it usually has a tape drive, or it can connect to a networked tape drive device to archive data and so ensure data safety and integrity.

I.A. From Hosts to Servers

I.A.1. Centralized System

The concept of network computer architecture is evolutionary. The origin of the centralized computer dates back to the 1940s. During that time host computers were very large and expensive machines, like the famous MARK I, ENIAC, and EDVAC. Even after computers were commercialized around the 1950s, most computers were “hosted” in highly secured data processing centers, and users accessed the host computers through “dumb” terminals. By the late 1960s, IBM had become a dominant vendor of large-scale computers called “mainframe” hosts. In the mid-1970s, minicomputers started challenging mainframe computers. In many cases, minicomputers could host applications and perform the same functions as mainframes, but at lower cost. Since the host is the center of this system architecture, it is called a centralized system. In the early 1980s, most computers, whether large-scale mainframes or smaller minicomputers, were operated as application hosts, while terminal users had limited access to their hosts; in fact, most terminal users never had any physical access to the host computers. Generally speaking, a host is a computer designed for massive parallel processing of large quantities of information, connected to terminals utilized by end users. All network services, application executions, and database requests are hosted in this computer, and all data are stored in this host.

Basically, minicomputers and mainframe computers were the de facto standard of enterprise centralized computing systems before PCs entered the professional computing area. Figure 1 shows a typical centralized system.

Figure 1. Centralized system.

I.A.2. Distributed System

A distributed system consists of a collection of microcomputers connected to one or more computer servers by a computer network and equipped with network operating system software. Network operating system software enables computers to coordinate their activities and to share the resources of the system—hardware, software, and data. It can also coordinate activities among servers to achieve better overall performance for network tasks. A well-designed distributed system could provide users with a single, integrated computing environment even though the computers are located in geographically separated facilities.

The development of distributed systems followed the emergence of high-speed LANs (local area networks) and WANs (wide area networks) in the early 1980s. Ever since IBM introduced the IBM PC into the computing market, the enterprise computing system has changed dramatically, as computers have become more and more affordable to users. The availability of high-performance microcomputers, workstations, and server computers has resulted in a major shift towards distributed systems and away from centralized computing systems. People are no longer tied to high-end and expensive centralized computing environments. The trend has been accelerated by the development of distributed system software such as Oracle Server and SQL Server packages, designed to support the development of distributed applications. It is very common now to see a distributed application running collaboratively among several servers. In a well-designed distributed application, any task can be executed by more than one server, so a single faulty server won't bring down the application. For example, a statewide hospital system's IS (information system) department might have to support tens of medical centers scattered across the state and tens of departmental information systems (like a UNIX-based RIS for radiology and a Windows NT-based dietary system) running on Windows 2000/NT servers and different UNIX platforms, while the HIS (Hospital Information System) and CIS (Clinical Information System) are on mainframe computers. Only a distributed computing environment could bring so many autonomous departmental systems together and make them work collaboratively.

Enterprise computing systems differ in significant ways from centralized and distributed systems. Since the late 1980s, there has been a move from mainframe systems to networked personal computing systems, with network software providing functionality such as shared data storage and electronic mail. Despite the interconnectivity of distributed systems, they remain largely independent: each user runs his or her applications on his or her own microcomputer, and interactions between systems occur through shared files and mail. Client/server architecture has become common in such distributed systems. A number of client computers are configured as a sort of ring around the server, which provides database functionality and file management. Again, the client computers interact indirectly through shared servers. Figure 2 shows a typical distributed system.

Figure 2. Distributed system.

I.B. Servers for Client/Server Computing

Client/server computing is a phenomenon that has developed in the past decade. The inexpensive and powerful PC took over the previously “dumb,” terminal-oriented enterprise desktop as quickly as people could think. To use the excess computing capacity of desktop PCs or workstations, many organizations began downloading data from enterprise host computers for local manipulation at the user's fingertips. In this client/server model, the definition of the server continues to include what traditional hosts and servers have, but people can envision the placement of network and application services on many different operating system platforms.

I.B.1. Servers for “thin” Clients
I.B.1.a. “Fat” Clients versus “Thin” Clients

Client-server computing architecture refers to the way in which software components interact to form a system. As the name suggests, there is a client process, which requires some resource, and a server, which provides the resource. There is no requirement that the client and server reside on the same machine. In practice, it is quite common to place a server at one site in a local area network and the clients at the other sites. Clients can be categorized into two types: fat clients and thin clients. A “fat” client requires considerable resources on the client's computer to run effectively, including disk space, RAM, and CPU power, and it carries significant client-side administration overhead. A “thin” client requires fewer resources on the client's computer and is responsible for only simple logic processing, such as input validation; the client hardware can be less expensive because the client is thin.

I.B.1.b. Thin Clients

In the thin-client/server computing model, applications execute 100% on the server. The client computers are ordinary desktop PCs running one or more terminal programs to access servers over the LAN or WAN. The thin-client/server model involves connecting thin-client software or a thin-client hardware device to the server side using a highly efficient network protocol. The thin-client/server architecture enables 100% server-based processing, management, deployment, and support for mission-critical, productivity, Web-based, or other custom applications across any type of connection to any type of client hardware, regardless of platform. The client hardware can include desktop PCs, network computers, handheld computers, wireless PDAs, and Windows CE devices.

I.B.1.c. Advantages Of “thin” Clients

Though it appears to be a very primitive approach to client/server computing, since it simply replaces one or more dumb terminals with a desktop PC, the thin-client/server model has regained some ground recently because of the TCO (total cost of ownership) consideration for IS operations and the appearance of lower-powered client devices like the PDA (personal digital assistant) for palmtop computing. In the thin-client/server computing model, there is no need to purchase or upgrade client hardware—just run the latest software on the servers instead. Organizations can let the client side evolve comfortably, leveraging existing hardware, operating systems, software, networks, and standards. Thin-client/server computing extends the life of the organization's existing computing infrastructure considerably and might reduce TCO if it is planned and implemented carefully with well-scaled servers.

I.B.2. Servers for Multiple Tiers of Client/Server

In a typical client/server based application, the client process and server process can be on the same computer, or distributed in two or more computers. A single-tier client/server application consists of a single layer that supports the user interface, the business rules, and the data manipulation processes all on one computer. This kind of application is rarely used today because it will not take advantage of the distributed computing environment.

I.B.2.a. Two-Tier Client/Server Architecture

The two-tier client/server structure is the simplest client/server structure and is still in use for many applications today. In a two-tier application, the business rules and user interface remain part of the client application on the client's computer. The traditional two-tier client/server architecture provides a basic separation of tasks. The client (tier 1) is primarily responsible for the presentation of data to the user, and the server (tier 2) is primarily responsible for supplying data services to the client. The client handles user interface actions and the main business application logic. The server provides server-side validation, data retrieval, and data manipulation. This separate server application could be an RDBMS (relational database management system), functioning as the data storage/retrieval system for the application.
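As a minimal sketch of the two-tier pattern just described (not taken from this article), the fragment below plays the role of a fat client: the presentation and the business rule both live in the client program, which talks directly to the database server. The database file, table, and threshold are invented for illustration; in a real deployment the client would connect over the network to an RDBMS rather than a local file.

```python
import sqlite3

DB_PATH = "orders.db"  # tier 2: the database server (stood in for by a local file)

def show_large_orders(threshold: float = 100.0) -> None:
    """Tier 1 (fat client): retrieves the data and applies the business rule
    and presentation logic entirely on the client side."""
    conn = sqlite3.connect(DB_PATH)
    rows = conn.execute("SELECT id, amount FROM orders").fetchall()
    conn.close()
    for order_id, amount in rows:
        if amount > threshold:  # business rule applied in the client
            print(f"Order {order_id}: {amount:.2f}")  # presentation also in the client

if __name__ == "__main__":
    show_large_orders()
```

Because the rule lives in every client, changing it means redistributing the client application, which is exactly the scalability problem the next section describes.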

I.B.2.b. Three-tier Client/server Architecture

The need for enterprise scalability challenged the traditional two-tier client/server architecture. In the mid-1990s, as applications became more complex and could potentially be deployed to hundreds or thousands of end-users, the client side presented problems that prevented true scalability. Because two-tier client/server applications are not optimized for WAN connections, response time is often unacceptable for remote users. Application upgrades require software, and often hardware, upgrades on all client PCs, resulting in potential version control problems.

By 1995, a new three-layer client/server architecture had been proposed, with each layer running on a different platform:

1.

Tier one is the user interface layer, which runs on the end-user's computer.

2.

Tier two is the business logic and data processing layer. This added middle tier runs on a server and is often called the application server.

3.

Tier three is the data storage system, which stores the data required by the middle tier. This tier may run on a separate server, called the database server or back-end server.

In a three-tier application, the user interface processes remain on the client's computer, but the business rules processes reside and execute on the middle application layer, between the client's computer and the computer that hosts the data storage/retrieval system. One application server is designed to serve multiple clients. In this type of application, the client never accesses the data storage system directly.

I.B.2.c. Advantages Of Three-tier Client/server Architecture

Since there are three physically separated layers in the application, the added modularity makes it easier to modify or replace one tier without affecting the other tiers. Application maintenance is centralized by transferring the business logic for many end-users to a single application server. This eliminates the software distribution concerns that are problematic in the traditional two-tier client/server architecture. An additional advantage is that the three-tier architecture maps quite naturally to the Web environment, with a Web browser acting as the thin client, a Web server acting as the application server, and a database server as the back end.
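To contrast with the two-tier sketch shown earlier, the following minimal example (again not from this article, with an invented table, port, and threshold) moves the business rule into a middle-tier application server: a thin client such as a Web browser issues an HTTP request, the application server applies the rule, and the back-end database answers the data request.

```python
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

DB_PATH = "orders.db"  # tier 3: the back-end database (placeholder path)

class OrderHandler(BaseHTTPRequestHandler):
    """Tier 2: the application server, which now owns the business rule."""

    def do_GET(self):
        conn = sqlite3.connect(DB_PATH)
        rows = conn.execute("SELECT id, amount FROM orders").fetchall()
        conn.close()
        # The business rule (only orders above a threshold) runs in the middle tier,
        # so every thin client gets the same behavior with no local logic to update.
        large_orders = [{"id": r[0], "amount": r[1]} for r in rows if r[1] > 100]
        body = json.dumps(large_orders).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Tier 1 is any thin client, e.g. a Web browser requesting http://localhost:8000/
    HTTPServer(("localhost", 8000), OrderHandler).serve_forever()
```

Changing the rule now means updating one application server rather than every client PC, which is the maintenance advantage described above.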

I.B.2.d. Multitier Client/server Architecture

The three-tier architecture can be extended to n tiers, with additional tiers added to provide more flexibility and scalability. Some distributed computing systems have more than three layers, but the basic rules are the same as those for three-tier applications. For example, the middle tier of the three-tier architecture could be split into two, with one tier for the Web server and another for the application server. Using more than one server in the second and third layers will usually increase overall application efficiency as needed. Figure 3 shows a typical multitier client/server architecture.

Figure 3. A multitier client/server diagram.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B012227240400157X

Definition, structure, content, use and impacts of electronic health records: A review of the research literature

Kristiina Häyrinen, ... Pirkko Nykänen, in International Journal of Medical Informatics, 2008

The concept of EHR covers a wide range of different information systems from departmental systems to comprehensive electronic health care records. Various kinds of departmental EHRs such as intensive care records, emergency department records or ambulatory records have now been in use for a long time, but hospital-wide EHRs, primary care or personal health records are less common. A patient-centred electronic health care record was introduced in only one study, and personal health records in eight studies. Interestingly, the definition of EHR does not include nursing information systems or computerized instruments; however, descriptions of these systems or instruments were provided in the articles.

Read full article

URL: //www.sciencedirect.com/science/article/pii/S1386505607001682

Which of the following is responsible for designing and developing information systems?

A systems analyst is an information technology (IT) professional who specializes in analyzing, designing and implementing information systems.

What type of system tracks inventory and related business processes across departments and companies?

ERP systems unify critical business functions like finance, manufacturing, inventory and order management, customer communication, sales and marketing, project management and human resources. One major feature is detailed analytics and reporting on each department.

Which systems can a company use to capture and share information throughout the organization?

A management information system (MIS) is a computerized database of financial information organized and programmed in such a way that it produces regular reports on operations for every level of management in a company. It is usually also possible to obtain special reports from the system easily.

Which of the following types of systems would you use to manage relationships with your customers?

Customer relationship management (CRM) is a technology for managing all your company's relationships and interactions with customers and potential customers.
