Author: Admin

  • Security Information and Event Management Systems (SIEMS)

    Security Information and Event Management Systems (SIEMS)

    Security Information and Event Management Systems (SIEMS) automate incident identification and resolution based on built-in business rules to help improve compliance and alert staff to critical intrusions. IT audits, standards, and regulatory requirements have become an important part of most enterprises’ day-to-day responsibilities. As part of that burden, organizations spend significant time and energy scrutinizing their security and event logs to track which systems have been accessed, by whom, what activity took place, and whether it was appropriate.

    Here is the article to explain the essay on Security Information and Event Management Systems (SIEMS)!

    Organizations are increasingly looking towards data-driven automation to help ease the burden. As a result, the SIEM has taken form and has provided focused solutions to the problem. The security information and event management systems market is driven by a rapidly increasing need for customers to meet compliance requirements, as well as the continued need for real-time awareness of external and internal threats. Customers need to analyze security event data in real-time (for threat management) and to analyze and report on log data, and this dual need has made the security information and event management systems market more demanding. The market remains fragmented, with no dominant vendor.

    This report, entitled ‘Security Information and Event Management Systems (SIEMS) Solutions’, gives a clear view of SIEM solutions and whether they can help to improve intrusion detection and response. Following this introduction is the background section, which analyzes in depth the evolution of SIEM, its architecture, its relationship with log management, and the need for SIEM products. In the analysis section, I analyze the SIEM functions in detail along with real-world examples. Finally, the conclusion section summarizes the paper.

    What is the Meaning and Definition of SIEMS?

    Security Information and Event Management Systems solutions are a combination of two different products, namely SIM (security information management) and SEM (security event management). SIEM solutions are closely related to Network Intrusion Detection Systems (NIDS): SIEM technology provides real-time analysis of security alerts generated by network hardware and applications. The objective of SIEM is to help companies respond to attacks faster and to organize mountains of log data. SIEM solutions come as software, appliances, or managed services. Increasingly, SIEM solutions are being used to log security data and generate reports for compliance purposes. Though Security Information Event Management and log management tools have been complementary for years, the two technologies are expected to merge.

    Evolution of SIEM:

    SIEM emerged as companies found themselves spending a lot of money on intrusion detection/prevention systems (IDS/IPS). These systems helped detect external attacks, but because of the reliance on signature-based engines, a large number of false positives were generated. First-generation SIEM technology was designed to improve this signal-to-noise ratio and helped to capture the most critical external threats. Using rule-based correlation, SIEM helped IT detect real attacks by focusing on the subset of firewall and IDS/IPS events that violated policy.
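
    The rule-based correlation described above can be sketched in a few lines of Python. The event fields and the rule itself (a source IP that both trips a firewall deny and an IDS signature) are illustrative assumptions, not any vendor's schema:

```python
# Minimal sketch of rule-based correlation: flag source IPs that trigger
# both a firewall deny and an IDS signature in the same window.
# Event fields ("src", "type") are illustrative, not a real vendor schema.

def correlate(events):
    """Return source IPs seen in both firewall denies and IDS alerts."""
    fw_denies = {e["src"] for e in events if e["type"] == "fw_deny"}
    ids_hits = {e["src"] for e in events if e["type"] == "ids_alert"}
    return fw_denies & ids_hits

events = [
    {"src": "10.0.0.5", "type": "fw_deny"},
    {"src": "10.0.0.5", "type": "ids_alert"},
    {"src": "10.0.0.9", "type": "ids_alert"},  # no firewall deny: ignored
]
print(correlate(events))  # flags only 10.0.0.5
```

    Real correlation engines add time windows and many more rule types, but the principle, reducing thousands of raw events to a short list of policy violations, is the same.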

    Traditionally, SIEM solutions have been expensive and time-intensive to maintain and tweak, but they solve the big headache of sorting through excessive false alerts and they effectively protect companies from external threats. While that was a step in the right direction, the world got more complicated when new regulations such as the Sarbanes-Oxley Act and the Payment Card Industry Data Security Standard demanded much stricter internal IT controls and assessment. To satisfy these requirements, organizations are required to collect, analyze, report on, and archive all logs to monitor activities inside their IT infrastructures.

    The idea is not only to detect external threats but also to provide periodic reports of user activities and create forensic reports surrounding a given incident. Though SIEM technologies collect logs, they process only a subset of data related to security breaches. They weren’t designed to handle the sheer volume of log data generated by all IT components, such as applications, switches, routers, databases, firewalls, operating systems, IDS/IPS, and Web proxies.

    Other evolutions:

    With an eye to monitoring user activities rather than external threats, log management entered the market as a technology with an architecture able to handle much larger volumes of data and to scale to meet the demands of the largest enterprises. Companies implement log management and SIEM solutions to satisfy different business requirements, and they have also found that the two technologies work well together. Log management tools are designed to collect, report on, and archive a large volume and breadth of log data, whereas SIEM solutions are designed to correlate a subset of log data to point out the most critical security events.

    Looking at an enterprise IT arsenal, one is likely to see both log management and SIEM. Log management tools often assume the role of a log data warehouse that filters and forwards the necessary log data to SIEM solutions for correlation. This combination helps to optimize the return on investment while also reducing the cost of implementing SIEM. In these tough economic times, IT is likely to try to stretch its logging technologies to solve even more problems, and will expect its log management and SIEM technologies to work more closely together and reduce overlapping functionality.

    Relation between SIEM and log management:

    Like many things in the IT industry, there is a lot of market positioning and buzz around how the original term SIM (Security Information Management), the subsequent marketing term SEM (Security Event Management), and the newer combined term SIEMS (Security Information and Event Management Systems) relate to the long-standing process of log management. The basics of log management are not new. Operating systems, devices, and applications all generate logs of some sort that contain system-specific events and notifications. The information in logs may vary in overall usefulness, but before one can derive much value out of them, they first need to be enabled, then transported, and eventually stored. Therefore, how one gathers this data from an often distributed range of systems and gets it into a centralized (or at least semi-centralized) location is the first challenge of log management. There are varying techniques to accomplish centralization, ranging from standardizing on the Syslog mechanism and deploying centralized Syslog servers, to using commercial products to address log data acquisition, transport, and storage.
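
    A minimal centralized Syslog collector of the kind described can be sketched in Python. The port (5514), buffer size, and in-memory store are illustrative assumptions; real deployments use UDP/514 and a durable backend:

```python
import socket

# Minimal sketch of a centralized Syslog collector: listen on a UDP port
# and append each datagram to a local store. The port (5514), buffer
# size, and in-memory store are illustrative assumptions; real Syslog
# uses UDP/514 and durable storage.

def make_listener(host, port):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    sock.settimeout(5)                # avoid blocking forever in the demo
    return sock

def collect_one(sock, store):
    data, addr = sock.recvfrom(8192)  # one Syslog datagram
    store.append((addr[0], data.decode(errors="replace")))

# Loopback demo: send ourselves one message and collect it.
store = []
listener = make_listener("127.0.0.1", 5514)
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"<34>Oct 11 22:14:15 host su: auth failure", ("127.0.0.1", 5514))
collect_one(listener, store)
sender.close()
listener.close()
print(store[0][1])
```

    This is the acquisition-and-storage step only; transport reliability and encryption, discussed below, are separate concerns.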

    Other issues:

    Some of the other issues in log management include working around network bottlenecks, establishing reliable event transport (standard Syslog over UDP is unreliable), setting requirements around encryption, and managing raw data storage. So the first steps in this process are figuring out what type of log and event information needs to be gathered, how to transport it, and where to store it. That leads to another major consideration: what does one want to do with all that data? It is at this point that basic log management ends and the higher-level functions associated with SIEM begin.

    SIEM products typically provide many of the features essential for log management, but add event reduction, alerting, and real-time analysis capabilities. They provide the layer of technology that allows one to say with confidence that logs are not only being gathered but also being reviewed. SIEM also allows for the importation of data that isn’t necessarily event-driven (such as vulnerability scanning reports); this is the “Information” portion of SIEM.

    SIEM architecture:

    Long-term log management and forensic queries need a database built for capacity, with file management and compression tools. Short-term threat analysis and correlation need real-time data, CPU, and RAM. The solution for this is as follows:

    • Split the feed into two concurrent engines.
    • Optimize one for real-time analysis, storing up to 30 days of data (100–300 GB).
    • Optimize the second for log compression, retention, and query functions (1 TB+).
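
    The split-feed design above can be sketched as a dispatcher that duplicates every event to a bounded real-time engine and a compressed long-term archive. The class names and event format are illustrative assumptions:

```python
import zlib
from collections import deque

# Sketch of the split-feed architecture: every event goes to both a
# real-time engine (bounded window, ~30 days in practice) and a
# compressed long-term store. Names and event format are illustrative.

class RealTimeEngine:
    def __init__(self, max_events):
        self.window = deque(maxlen=max_events)   # bounded short-term buffer
    def ingest(self, event):
        self.window.append(event)

class LogArchive:
    def __init__(self):
        self.blobs = []
    def ingest(self, event):
        self.blobs.append(zlib.compress(event.encode()))  # compress for retention

def dispatch(event, engines):
    for engine in engines:        # split the feed: both engines see every event
        engine.ingest(event)

rt, archive = RealTimeEngine(max_events=3), LogArchive()
for e in ["login ok", "fw deny 10.0.0.5", "ids alert 10.0.0.5", "login ok"]:
    dispatch(e, [rt, archive])

print(len(rt.window), len(archive.blobs))  # window is bounded; archive keeps all
```

    The real-time window discards the oldest event once full, while the archive retains (and compresses) everything for forensics and compliance queries.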

    The architecture of a SIEM can be described as follows:

    A collector is a process that gathers data. Collectors come in many shapes and sizes, from agents that run on the monitored device to centralized logging devices with pre-processors to split-stream the data. These can be simple REGEX file-parsing applications or complex agents for OPSEC LEA, .NET/WMI, SDEE/RDEP, or ODBC/SQL queries. Not all security devices are kind enough to forward data, so multiple input methods, including active pull capabilities, are essential. Also, since Syslog data is not encrypted, a collector may be needed to provide encrypted transport.
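
    A simple REGEX file-parsing collector of the kind mentioned can be sketched as follows; the firewall log format and field names are illustrative assumptions:

```python
import re

# Sketch of a REGEX-parsing collector: normalize raw firewall lines into
# structured events. The log format below is an illustrative assumption.

LINE = re.compile(
    r"(?P<action>ACCEPT|DENY)\s+src=(?P<src>\S+)\s+dst=(?P<dst>\S+)\s+dport=(?P<dport>\d+)"
)

def parse(raw_lines):
    events = []
    for line in raw_lines:
        m = LINE.search(line)
        if m:                          # skip lines that do not match
            evt = m.groupdict()
            evt["dport"] = int(evt["dport"])
            events.append(evt)
    return events

raw = [
    "Oct 11 22:14:15 fw1 DENY src=10.0.0.5 dst=192.168.1.2 dport=22",
    "Oct 11 22:14:16 fw1 garbled line with no match",
]
print(parse(raw))
```

    Normalizing every source into one structured event shape is what makes cross-device correlation downstream possible.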

    Analysis engine:

    A threat analysis engine will need to run in real-time, continuously processing and correlating events of interest passed to it by the collector, and reporting to a console or presentation-layer application about the threats found. Typically, reporting events from the last 30 days is sufficient for operational considerations. A log manager will need to store a great deal of data, may take either raw logs or filtered events of interest, and needs to compress, store, and index the data for long-term forensic analysis and compliance reporting. Capacity for 18 months or more of data is likely to be required.

    Year-end closing of the books and the arrival of the auditors often necessitate 12 months of historical data, plus padding of several months while the books are finalized and the audit is completed. At the presentation layer, a console presents events to security staff and managers. This is the primary interface to the system for day-to-day operations, and it should efficiently prioritize and present events with full history and correlation rationale.
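
    The 18-month retention requirement translates directly into capacity planning. A back-of-the-envelope sizing, in which the event rate, average event size, and 10:1 compression ratio are all assumed values:

```python
# Back-of-the-envelope sizing for the 18-month log store. The event rate,
# average event size, and 10:1 text-log compression ratio are all
# illustrative assumptions, not figures from any particular product.

events_per_day = 1_000_000
bytes_per_event = 500
retention_days = 18 * 30            # ~18 months
compression_ratio = 10

raw_gb = events_per_day * bytes_per_event * retention_days / 1e9
stored_gb = raw_gb / compression_ratio
print(f"raw: ~{raw_gb:.0f} GB, compressed: ~{stored_gb:.0f} GB over 18 months")
```

    At larger event volumes, or with raw-log retention mandated by policy, the multi-terabyte figures cited above are quickly reached.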

    SIEM functions:

    With some subtle differences, there are four major functions of SIEM solutions. They are as follows:

    1. Log Consolidation: centralized logging to a server.
    2. Threat Correlation: the artificial intelligence used to sort through multiple logs and log entries to identify attackers.
    3. Incident Management: workflow (what happens once a threat is identified? the link from identification to containment and eradication), notification (email, pagers, alerts to enterprise managers such as MOM or HP OpenView), trouble-ticket creation, automated responses (execution of scripts), and response and remediation logging.
    4. Reporting: operational efficiency/effectiveness, compliance (SOX, HIPAA, FISMA), and ad hoc/forensic investigations.

    As for the business case for SIEM: all engineers are perpetually drawn to new technology, but purchasing decisions should by necessity be based on need and practicality. Even though the functions provided by SIEM are impressive, it should be chosen only if it fits an enterprise’s needs.

    Why use a SIEM?

    There are two branches on the SIEM tree, namely operational efficiency and effectiveness, and log management/compliance. Both are achievable with a good SIEM tool. However, since there is a large body of work on log management, and compliance has multiple branches, this essay will focus only on using a SIEM tool effectively to point out the real attackers and the worst threats, to improve security operations efficiency and effectiveness.

    Arguably, the most compelling reason for a SIEM tool from an operational perspective is to reduce the number of security events on any given day to a manageable, actionable list, and to automate analysis such that real attacks and intruders can be discerned. As a whole, the number of IT professionals and security-focused individuals at any given company has decreased relative to the complexity and capabilities demanded by an increasingly inter-networked web.

    While one organization may have dozens of highly skilled security engineers on staff poring through individual event logs to identify threats, SIEM attempts to automate that process and can achieve a legitimate reduction of 99.9+% of security event data while increasing effective detection over traditional human-driven monitoring. This is why most companies prefer SIEM.

    Reasons to use a SIEM:

    Knowing the need for a SIEM tool in an organization is very important. A defense-in-depth strategy (industry best practice) utilizes multiple devices: firewalls, IDS, AV, AAA, VPN, user events (LDAP/NDS/NIS/X.500), operating system logs, and more, which can easily generate hundreds of thousands of events per day; in some cases, even millions.

    No matter how good a security engineer is, about 1,000 events per day is a practical maximum that any one engineer can deal with. So if the security team is to remain small, it will need to be equipped with a good SIEM tool. No matter how good an individual device is, if it is not monitored and correlated, each device can be bypassed individually, and the total security capability of a system will not exceed its weakest link.

    When monitored as a whole, with cross-device correlation, each device signals an alert as it is attacked, raising awareness and threat indications at each point and allowing additional defenses to be brought into play, with an incident response proportional to the total threat. Even some small and medium businesses with just a few devices are seeing over 100,000 events per day; this has become usual in most companies.

    Real-world examples:

    Below are event and threat alert numbers from two different sites currently running with 99.xx% correlation efficiency on over 100,000 events per day. One industry expert referred to these as “amateur” level, stating that 99.99% or 99.999+% efficiency on well over 1,000,000 events per day is more common.

    • Manufacturing Company, Central USA – 24-hour average, un-tuned SIEM on the day of deployment:
    • Alarms Generated: 3,722
    • Correlation Efficiency: 99.06%
    • Critical / Major Level Alerts: 170
    • Effective Efficiency: 99.96%

    In this case, using a SIEM allows the company’s security team (2 people in an IT staff of 5) to respond to 170 critical and major alerts per day (a number likely to decrease as the worst offenders are firewalled out and the worst offenses dealt with), rather than to nearly 400,000 raw events.
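
    The efficiency figures above follow directly from the event counts; as a quick check (the ~396,000 raw-event total is inferred from the "nearly 400,000" figure):

```python
# Reproducing the correlation-efficiency arithmetic from the example above.
# Efficiency = share of raw events that do NOT surface as alerts.
# The ~396,000 raw-event total is inferred from "nearly 400,000".

def efficiency(raw_events, alerts):
    return 100 * (1 - alerts / raw_events)

raw = 396_000
print(f"correlation efficiency: {efficiency(raw, 3722):.2f}%")  # vs. 3,722 alarms
print(f"effective efficiency:   {efficiency(raw, 170):.2f}%")   # vs. 170 critical/major
```

    Both results match the 99.06% and 99.96% figures reported for the site.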

    • Financial Services Organization – 94,600 events – 153 actionable alerts – 99.83% reduction.
    • The company above deals with a very large volume of financial transactions, and a missed threat can mean real monetary losses.

    Concerning the business case: a good SIEM tool provides the analytics, and the knowledge of a good security engineer can be automated and repeated against a mountain of events from a range of devices. Instead of 1,000 events per day, an engineer with a SIEM tool can handle 100,000 events per day (or more). And a SIEM does not leave at night, find another job, take breaks, or take vacations. It is always working.

    SIEM Selection Criteria:

    The first thing one should look at is the goal, i.e., what the SIEM should do for the organization. If you just need log management, then make sure the vendor can import data from ALL of the available log sources. Not all events are sent via Syslog. Some may be sent through:

    • Checkpoint – LEA
    • Cisco IDS – RDEP/SDEE encryption
    • Vulnerability Scanner Databases – Nessus, eEye, ISS…
    • AS/400 & Mainframes – flat files
    • Databases – ODBC/SQL queries
    • Microsoft .NET/WMI

    Consider a product that has a defined data collection process that can pull data (run queries, retrieve files, make WMI API calls…) as well as accept input sent to it. It is also essential to be aware that logs, standards, and formats change; several (but not all) vendors can adapt by parsing files with REGEX and importing them, if one can get them a file. However, log management itself is not usually an end goal; what matters is the purpose for which the logs are used. They may be used for threat identification, compliance reporting, or forensics. It is also essential to know whether the data is captured in real-time. If threat identification is the primary goal, 99+% correlation/consolidation/aggregation is easily achievable, and when properly tuned, 99.99+% efficiency is within reach (1–10 actionable threat alerts per 100,000 events).

    Reporting:

    If compliance reporting is the primary goal, then consider what regulations one is subject to. Frequently a company is subject to multiple compliance requirements. Consider a Fortune 500 company like General Electric. As a publicly traded company, GE is subject to SOX; as a vendor of medical equipment and software, it is subject to HIPAA; as a vendor to the Department of Defense, it is subject to FISMA. GE must produce compliance reports for at least one corporate division for nearly every regulation.

    Two brief notes on compliance before looking at architecture: beware of vendors with canned reports. While they may be very appealing and sound like a solution, valid compliance and auditing is about matching output to one’s stated policies, and reports must be customized to match each company’s published policies. Any SIEM that can collect all of the required data, meet ISO 17799, and provide timely monitoring can be used to aid in compliance. Compliance is a complex issue with many management and financial process requirements; it is not just a function or report IT can provide.

    Advanced SIEM Topics:

    Risk-Based Correlation / Risk Profiling: correlation based on risk can dramatically reduce the number of rules required for effective threat identification. The threat and target profiles do most of the work. If the attacks are risk-profiled, three relatively simple correlation rules can identify 99+% of attacks. They are as follows:

    • IP Attacker – repeat offenders
    • IP Target – repeat targets
    • Vulnerability Scan + IDS Signature match – Single Packet of Doom
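
    The three rules can be sketched directly; the thresholds and the event schema ("src", "dst", "sig") are illustrative assumptions:

```python
from collections import Counter

# Sketch of the three risk-based correlation rules above. The thresholds
# and the event schema ("src", "dst", "sig") are illustrative assumptions.

def repeat_offenders(events, threshold=3):
    counts = Counter(e["src"] for e in events)
    return {ip for ip, n in counts.items() if n >= threshold}

def repeat_targets(events, threshold=3):
    counts = Counter(e["dst"] for e in events)
    return {ip for ip, n in counts.items() if n >= threshold}

def single_packet_of_doom(events, vuln_db):
    # IDS signature fires against a host a vulnerability scan has shown
    # to be vulnerable to exactly that attack.
    return [e for e in events if e["sig"] in vuln_db.get(e["dst"], set())]

events = [
    {"src": "10.0.0.5", "dst": "192.168.1.2", "sig": "CVE-2003-0352"},
    {"src": "10.0.0.5", "dst": "192.168.1.3", "sig": "probe"},
    {"src": "10.0.0.5", "dst": "192.168.1.4", "sig": "probe"},
]
vulns = {"192.168.1.2": {"CVE-2003-0352"}}

print(repeat_offenders(events))              # persistent source
print(single_packet_of_doom(events, vulns))  # scan + signature match
```

    The "single packet of doom" rule is the highest-confidence of the three: a signature match against a host already known to be vulnerable to that exact attack.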

    Risk-based threat identification is one of the more effective and interesting correlation methods, but it has several requirements:

    • A metabase of signatures – Cisco calls the attack X, ISS calls it Y, Snort calls it Z – to cross-reference the data
    • An automated method to keep the metabase up to date
    • Threats must be compiled and threat weightings applied to each signature/event
    • Reconnaissance events carry a low weighting – but aggregate and report on the persistent (low and slow) attacker
    • Fingerprinting – a bit more specific, a bit higher weighting
    • Failed user login events – a medium weighting; could be an unauthorized attempt to access a resource, or a forgotten password

    • Buffer overflows, worms, and viruses – high weighting – potentially destructive; events one needs to respond to unless the system has already been patched/protected

    • The ability to learn or adjust to one’s network, via input or auto-discovery: which systems are business-critical vs. which are peripherals, desktops, and non-essential
    • Risk profiling: proper application of trust weightings to reporting devices (NIST 800-42 best practice) can also help to lower the “cry wolf” issues seen with current security management
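
    The weighting scheme above, including the aggregation of individually low-weight reconnaissance into a "low and slow" alert, can be sketched as follows; the weights and alert threshold are assumed values:

```python
from collections import defaultdict

# Sketch of threat weighting with low-and-slow aggregation: individually
# low-weight recon events still raise an alert once their aggregate score
# per attacker crosses a threshold. Weights and threshold are assumptions.

WEIGHTS = {"recon": 1, "fingerprint": 2, "failed_login": 5, "buffer_overflow": 10}
ALERT_THRESHOLD = 10

def score_attackers(events):
    scores = defaultdict(int)
    for e in events:
        scores[e["src"]] += WEIGHTS.get(e["kind"], 0)
    return {src: s for src, s in scores.items() if s >= ALERT_THRESHOLD}

events = (
    [{"src": "10.0.0.5", "kind": "recon"}] * 12          # low and slow
    + [{"src": "10.0.0.9", "kind": "buffer_overflow"}]   # single destructive event
    + [{"src": "10.0.0.7", "kind": "fingerprint"}]       # below threshold
)
print(score_attackers(events))
```

    Note how the persistent scanner and the single destructive event both cross the threshold, while the one-off fingerprint does not.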

    Next-generation SIEM and log management:

    One area where the tools can provide the most needed help is compliance. Corporations increasingly face the challenge of staying accountable to customers, employees, and shareholders, and that means protecting IT infrastructure, customer and corporate data, and complying with rules and regulations as defined by the government and industry. Regulatory compliance is here to stay, and under the Obama administration, corporate accountability requirements are likely to grow.

    Log management and SIEM correlation technologies can work together to provide more comprehensive views to help companies satisfy their regulatory compliance requirements, make their IT and business processes more efficient, and reduce management and technology costs in the process. IT organizations also will expect log management and intelligence technologies to provide more value to business activity monitoring and business intelligence. Though SIEM will continue to capture security-related data, its correlation engine can be re-appropriated to correlate business processes and monitor internal events related to performance, uptime, capacity utilization, and service-level management.

    We will see combined solutions provide deeper insight into not just IT operations but also business processes. For example, we can monitor a business process from step A to Z, and if a step gets missed, we will see where and when. In short, by integrating SIEM and log management, it is easy to see how companies can save by de-duplicating efforts and functionality. The functions of collecting, archiving, indexing, and correlating log data can be collapsed, which will also lead to savings in the resources required and in the maintenance of the tools.

    CONCLUSION:

    SIEMS (security information and event management systems) is a complex technology, and the market segment remains in flux. SIEM solutions require a high level of technical expertise, and SIEM vendors require extensive partner training and certification. SIEM gets more exciting when one can apply log-based activity data and security-event-inspired correlation to other business problems. Regulatory compliance, business activity monitoring, and business intelligence are just the tip of the iceberg. Leading-edge customers are already using the tools to increase the visibility and security of composite Web 2.0 applications, cloud-based services, and mobile devices. The key is to start with a central record of user and system activity and build an open architecture that lets different business users access the information to solve different business problems. So there is no doubt that SIEM solutions help improve intrusion detection and response.

  • Network Intrusion Detection Systems (NIDS) Comparison Essay

    Network Intrusion Detection Systems (NIDS) Comparison Essay

    A network intrusion detection system (NIDS) is a network security technology that monitors network traffic for suspicious activity and issues alerts when action is required to deal with the threat. Any malicious activity is reported and can be collected centrally by using the security information and event management (SIEM) method.

    Here is the article to explain the essay and comparison of Network Intrusion Detection Systems (NIDS)!

    Security information and event management (SIEM) software gives enterprise security professionals both insight into, and a track record of, the activities within their IT environment. The SIEM method incorporates outputs from multiple sources and employs alarm-filtering techniques to identify malicious actions. There are two types of systems: host-based intrusion detection and network intrusion detection. In this essay, I will look at both techniques, identify what classifies as a NIDS, and compare different types of NIDS.

    Classification of Network Intrusion Detection Systems (NIDS);

    As previously highlighted in the introduction, there are two types of systems: host-based intrusion detection and network intrusion detection, known as HIDS and NIDS. They differ in that host-based intrusion detection monitors malicious activity on a single computer, whereas network intrusion detection monitors traffic on the network to detect intrusions. The main difference between the two is that network intrusion detection systems monitor in real-time, tracking live data for tampering, whilst host-based intrusion systems check logged files for malicious activity. Both systems can employ a strategy known as signature-based detection or anomaly-based detection.

    Anomaly-based detection searches for unusual or irregular activity caused by users or processes. For instance, if the network was accessed with the same login credentials from several different cities around the globe, all in the same day, it could be a sign of anomalous behavior. A HIDS using anomaly-based detection surveys log files for indications of unexpected behavior, while a NIDS monitors for anomalies in real-time.
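
    The several-cities-in-one-day example can be expressed as a simple anomaly rule; the record fields and the threshold of two cities are illustrative assumptions:

```python
from collections import defaultdict

# Sketch of the anomaly rule described above: flag accounts whose logins
# come from too many distinct cities in one day. The field names and the
# threshold of 2 cities are illustrative assumptions.

def anomalous_accounts(logins, max_cities=2):
    cities = defaultdict(set)
    for entry in logins:
        cities[(entry["user"], entry["day"])].add(entry["city"])
    return {user for (user, day), cs in cities.items() if len(cs) > max_cities}

logins = [
    {"user": "alice", "day": "2021-03-01", "city": "London"},
    {"user": "alice", "day": "2021-03-01", "city": "Tokyo"},
    {"user": "alice", "day": "2021-03-01", "city": "Lagos"},
    {"user": "bob",   "day": "2021-03-01", "city": "London"},
]
print(anomalous_accounts(logins))  # alice: 3 cities in one day
```
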

    Signature-based detection monitors data for patterns. A HIDS running signature-based detection works similarly to an anti-virus application, which searches for bit patterns or keywords within files, by performing similar scans on log files. A signature-based NIDS works like a firewall, except that it scans keywords, packet types, and protocol activity entering and leaving the network, and also runs similar scans on traffic moving within the network.
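
    Signature-based scanning of log lines amounts to pattern matching; a minimal sketch, with an illustrative (not real) signature list:

```python
import re

# Minimal sketch of signature-based detection: match each log line or
# payload against a list of known-bad patterns. The signatures below are
# illustrative, not a real rule set.

SIGNATURES = {
    "sql_injection": re.compile(r"union\s+select", re.IGNORECASE),
    "path_traversal": re.compile(r"\.\./\.\./"),
}

def match_signatures(line):
    return [name for name, pat in SIGNATURES.items() if pat.search(line)]

print(match_signatures("GET /item?id=1 UNION SELECT password FROM users"))
print(match_signatures("GET /index.html"))  # benign: no matches
```

    This is also why signature-based systems miss zero-day attacks, as discussed below: a pattern that is not in the list is never matched.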

    Comparison of different types of Network Intrusion Detection Systems (NIDS);

    There are various types of NIDS available to protect the network from external threats. In this essay, we have discussed both HIDS (host-based) and NIDS (network intrusion detection systems), as well as signature-based and anomaly-based IDS. The two are very similar but function differently; when combined, they complement each other.

    For example, a HIDS only examines host-based actions, such as which applications are being used, which files are being accessed, and what information resides in the kernel logs. A NIDS analyzes network traffic for suspicious activity. A NIDS can detect an attacker before they begin an unauthorized breach of the system, whereas a HIDS cannot detect that anything is wrong until the attacker has breached the system.

    Signature-based IDS and anomaly-based IDS contrast with each other. For example, an anomaly-based IDS monitors activities on the network and raises an alarm if anything suspicious, i.e., anything other than normal behavior, is detected.

    There are many flaws with anomaly-based IDS. Both Carter (2002) and Garcia-Teodoro (2009) have listed disadvantages:

    • Appropriate training is required before the IDS is installed into any environment
    • It generates false positives
    • If the suspicious activity is similar to normal activity, it will not be detected

    There are also flaws with signature-based IDS. Carter (2002) highlights some disadvantages:

    • It cannot detect zero-day attacks
    • The database must be updated daily
    • The system must be updated with every possible attack signature
    • If an attack in the database is slightly modified, it is harder to detect

    Advances and developments of Network Intrusion Detection Systems (NIDS);

    There have been many advances and developments in NIDS over the last few years, such as honeypots and machine learning. Spitzner defines honeypots as computer systems that are designed to lure and deceive attackers by simulating a real network. Whilst these systems seem real, they have no production value; any interaction with them should be illicit. There are many kinds of honeypots, ranging from low-interaction systems to high-interaction, more complex systems designed to lure and attract advanced attackers.

    For example, high-interaction honeypots provide attackers with a real operating system that allows the attacker to execute commands. The chances of collecting large amounts of information on the attacker are very high, as all actions are logged and monitored. Many researchers and organizations use research honeypots, which gather information on the attacker and the tools they used to execute the attack. They are deployed mainly for research purposes, to learn how to provide improved protection against attackers.

    Other developments:

    Another advancement in network intrusion detection is machine learning. Machine learning provides computers with the capability of learning and improving from events without being explicitly programmed. The main aim of machine learning is to allow computers to learn without human intervention and to intervene accordingly.

    Unsupervised learning algorithms are used when the information provided for training is neither labeled nor classified. The task given to the machine is to group unsorted information according to patterns, similarities, and differences, without any training data given prior. Unsupervised learning algorithms can determine the typical pattern of the network and can report any anomalies without a labeled data set.

    One drawback of such an algorithm is that it is prone to false-positive alarms, but it can still detect new types of intrusions. By switching to a supervised learning algorithm, the network can be taught the difference between a normal packet and an attack packet. The supervised model can deal with known attacks and recognize variations of those attacks.
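
    An unsupervised approach of the kind described can be sketched without any labeled data, for example by learning the typical packet size from unlabeled traffic and flagging strong outliers by z-score. This pure-Python sketch is an illustrative stand-in for real algorithms, and the 3-sigma threshold is an assumption:

```python
from statistics import mean, pstdev

# Sketch of unsupervised anomaly detection: learn the typical pattern
# (here, packet size) from unlabeled traffic, then flag outliers by
# z-score. The 3-sigma threshold is a common illustrative assumption.

def fit(sizes):
    return mean(sizes), pstdev(sizes)

def is_anomaly(size, mu, sigma, z_threshold=3.0):
    return abs(size - mu) > z_threshold * sigma

baseline = [500, 510, 495, 505, 490, 500, 515, 485]   # unlabeled training traffic
mu, sigma = fit(baseline)

print(is_anomaly(502, mu, sigma))    # typical packet
print(is_anomaly(9000, mu, sigma))   # jumbo outlier: flagged
```

    No labels were needed to learn the baseline, which is precisely the strength, and the false-positive weakness, of the unsupervised approach.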

    Implementation of Network Intrusion Detection Systems (NIDS) within an SME;

    With threats developing every day, businesses need to adapt to the changing landscape of network security. For example, a business should focus on developing a strong security policy. This helps to define how employees use IT resources, and defines acceptable use and standards for company email. If a business creates a set of clear security policies and makes the organization aware of them, these policies will create the foundation of a secure network.

    Another suggestion provided in the report by SANS is to design a secure network with the implementation of a firewall, packet filtering on the router, and using a DMZ network for servers requiring access to the internet.

    More things;

    Testing of this implementation must be done by someone other than the individual or organization that configured the firewall and perimeter security. Developing a computer incident plan is key, as it will help to define how to respond to a security incident. The plan will help to identify the resources involved and to recover from and resolve the incident. If a business relies on the internet for day-to-day operations, the company may have to take resources offline, reset them, and rebuild the systems before returning them to use.

    Using personal firewalls on laptops is another suggestion for businesses to take into consideration. For example, laptop computers may be used in the office and, at other times, may be connected to foreign networks with prominent security issues.

    For example, the Blaster worm, which spread from August 11th, 2003, gained access to many company networks after a laptop was infected with the worm on a foreign network and the user subsequently connected to the corporate LAN. The worm eventually spread itself across the entire company network.

    In the report, SANS identified that laptops should have personal firewalls enabled to address any prominent security issues. They also highlighted that, for laptops containing sensitive data, encryption and authentication will reduce the possibility of data being exposed if the device is lost.

    Conclusion;

    From my findings, I believe that a NIDS is essential in protecting a company’s network from external and internal threats. If a company chose not to implement a NIDS within the business, the subsequent impact could be severe: the company could cease to exist if an attack destroyed customer records or other valuable data.

    With the implementation of a NIDS within a company, the business can mitigate the impact of an attack by using a honeypot to capture information about an attacker and the tools they used to execute the attack. This allows businesses to prepare themselves against attacks and secure any assets whose loss could damage the company’s ability to operate. By enforcing a security and fair-use policy within the company, employees are aware of the standards they must abide by while employed by the business.
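    The honeypot idea above can be sketched in a few lines: record what a decoy service sees so the business can study the attacker's tools afterwards. The class and field names below are hypothetical, chosen for illustration; a production honeypot would capture far richer data.

```python
# Illustrative sketch of honeypot-style logging: record connection attempts
# against a decoy service and summarise which ports each attacker probed.
# The event structure is an assumption for this essay, not a real product API.
from datetime import datetime, timezone

class HoneypotLog:
    """Collects connection attempts observed by a decoy service."""

    def __init__(self):
        self.events = []

    def record(self, attacker_ip: str, port: int, payload: bytes):
        # Keep a timestamp and a short sample of the tool's traffic.
        self.events.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "attacker_ip": attacker_ip,
            "port": port,
            "payload": payload[:64],
        })

    def summary(self):
        """Map each attacker IP to the set of ports it probed."""
        seen = {}
        for event in self.events:
            seen.setdefault(event["attacker_ip"], set()).add(event["port"])
        return seen
```

    A summary like this is what lets the business harden the real hosts: if the decoy shows an attacker probing a particular service port, that service deserves scrutiny on production systems.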

    This also allows the company to scrutinize employees who do not follow these practices and to take legal action if necessary. A business can hire managed security service providers who can assist in implementing the appropriate security measures for the business. Businesses must check whether the provider has qualified staff and proven experience, since the main threat behind most attacks on small to medium businesses lies within the company itself.

    Network Intrusion Detection Systems (NIDS) Comparison Essay; Image by Pete Linforth from Pixabay.
  • What is a Dissertations Meaning and Definition?

    What is a Dissertations Meaning and Definition?

    What is the Meaning and Definition of Dissertations? A dissertation (sometimes known as a “Thesis”) is a long piece of writing, usually prepared at the end of a course of study or as a text for a postgraduate degree such as a Master’s or Ph.D.

    Here is the article to explain, What is a Dissertations Meaning and Definition?

    A dissertation is either partly taught and partly researched, or completely researched. In the case of the second of these, you will need to find a topic that is both interesting and original, and that is capable of sustaining an extended argument. Taught dissertations tend to follow this structure: an introduction, the main body, and a conclusion.

    The second type is a dissertation that you have to research from scratch. This means you must focus on an aspect of a topic that you have studied and found particularly interesting, and wish to deepen and widen your research in this area. Then you put together a proposal based on your research, emphasizing any original aspects you have uncovered; once your idea is accepted, you proceed as with the taught dissertation.

    How do I find a suitable dissertation topic?

    When choosing a dissertation topic, the first thing to consider is whether or not you are sufficiently interested in the topic to sustain the research and writing over an extended period. Your underlying motivation in the selection of your topic, however, should be originality. This is the major factor that will make your topic attractive and acceptable to a research committee.

    Originality, however, need not mean coming up with an idea that has never been thought of before; though if you can do this, of course, it is definitely to your advantage! Most dissertations rely on originality of approach and/or perspective rather than a completely original topic, as in most cases, especially within the Arts, such topics are almost impossible to find. The best way to seek out a niche of originality is via research.

    Where do I start?

    So, the starting point to ANY dissertation is choosing a topic. You want to choose something you have an interest in since you must write thousands of words and read a lot of information about it! To start getting some ideas together, you could brainstorm a few topics you have an interest in. Think about a module you particularly enjoyed or an article you read that appealed to you. It could even be something you have never studied before but want to explore further.

    Beware, though – not everything you think would be a good topic for a dissertation actually will be. You might want to look at “Victorian Literature” or “Russian History”, which sound like perfectly valid academic subjects. But they are too vast, and a finished dissertation on one of them would either be massively over the word limit or would only skim the surface.

    Checklist for choosing a dissertation topic;

    Choosing a dissertation topic sounds easy. You have been given the chance to write about something you like, or at least something you feel is worth studying. It’s not like most of the essays you may have written before, which came with titles already attached.

    • Jot down your ideas of what you think is interesting, and what is worth studying
    • Remember to not make them too broad, or too narrow
    • Do some research to find out what has been done before, and where your work will sit in the canon of work
    • Discuss your ideas with your tutor and potential supervisors
    • Choose something you will enjoy studying, even if it’s not quite what you first had in mind – some of the best dissertations were not the student’s first choice!

    What is the importance of research in my dissertation?

    The importance of research in your dissertation cannot be overestimated; it is quite simply the backbone of your dissertation. Reading widely and deeply on your chosen topic should be the first thing you do when thinking about your proposed dissertation. This means reading the basic texts first, then moving on to the most recent work on the subject to ensure that no one else has pre-empted your idea – it can happen!

    You must look at the foundation texts for your subject first. Every topic has these, and you will be familiar with them from previous work on the subject. These texts are especially useful, not only because they are basic to the subject, but also because you can use their bibliographies to expand your research. This is perfectly acceptable: if you look carefully, you will see that many of the texts are common to all of them, so a core of knowledge informs them all. As the writer of an original dissertation, you will be adding to this core, and you should not feel that it is wrong in any way to use these sources in your dissertation research.

    Research;

    As you research, keep a record of your reading in the prescribed format of your college or university. This will enable you to familiarise yourself with the method of citation you are required to use in your dissertation. As these often differ considerably from one another, you should consult the style guide for the required method before you begin. If you do not have one, there should be one in your academic library and/or online.

    Another advantage of keeping a detailed and meticulous record of your research is that it makes your bibliography much easier to compile later; in fact, you might say that your bibliography evolves as your research does. What you are chiefly looking for as you read is a niche for your research to fill. Try to read even more critically than usual, looking for spaces where questions are left unanswered, because your dissertation proposal could answer them.

    What is a dissertation proposal?

    A dissertation proposal is a document you prepare to submit to the research committee of your academic institution to get your dissertation research accepted. See the links below for guidance on writing this and examples.

    How to Write a Dissertation Proposal?

    Depending on the type of dissertation you will go on to complete; there might be a few structural differences (which we will cover a little later on). However, every proposal must contain a few essential things:

    • An outline of the topic you are researching.
    • An explanation of how you are going to find the information you need.
    • A hypothesis or question that will be explored and answered in the dissertation.
    • A reference list or bibliography which pinpoints a handful of sources likely to be useful for your research.

    The word count will vary depending on your subject, course, and individual university; but proposals are typically between 1,000 and 3,000 words long. The idea of a dissertation is to find a gap in the existing research and conduct your research to address this.

    Research gaps;

    Research gaps could include things like:

    • Date of studies (for example, much of the literature on a particular field could be 5-10 years old so an update may be due).
    • The subject of studies (for example, there is not as much academic research on the novels of Anne Bronte as there is about her more famous sisters, Charlotte and Emily, so there is a ‘gap’ here).
    • Particular theories and frameworks (for instance, there may be lots of studies on the issue of anxiety disorder; but not very many that address it from a psychoanalytic perspective).

    The idea is to provide a snapshot of what your dissertation is going to do. This way, your tutor can give you feedback; they might suggest that a different focus or a different research method would be better for your dissertation, for example. The thing to remember is that your dissertation will almost certainly end up being different in some way from your proposal, and that’s okay!

    You will need to be able to describe and evaluate what your research is for and how it will achieve its goals. You will need to demonstrate that your approach is methodologically sound, ethical, feasible, and relevant.

    How should I prepare, write and present my dissertation?

    Once the research committee has accepted your proposal, a supervisor will be appointed to oversee your work from its preparation through to its completion. Your supervisor will be of invaluable help to you at every stage, and you should meet with them regularly.

    Both you and your supervisor will be expected to submit regular reports to the faculty research committee to keep them fully up to date on your progress. (The research committee is simply a group of senior lecturers within the department, appointed by the governing senate of the university; sometimes your supervisor will be a member of this committee.) As has been mentioned in some detail, research should be the main element of your work, and you should be collecting evidence to use in your dissertation.

    Format of Presenting a Dissertation;

    The basic format of presenting a dissertation is similar to that of the dissertation proposal. This might include:

    • A title page (this needs to be definitive now, though it will not be at all unusual if you only decide it at the end of your dissertation); include your name and degree.
    • A contents page (self-explanatory, as has been said, using consecutive page numbers, with the introduction in Roman numerals in lower case – such as ‘iv’ instead of ‘4’).
    • An abstract (this is a one-page summary of what is contained within the dissertation as a whole, with chapter summaries).
    • The introduction (this should introduce the dissertation topic, with a clear thesis statement and an indication of the methodology to be used).
    • The main body of the dissertation (spread across several chapters – usually between three and five, depending on the length of the overall dissertation). The individual chapters of the main body should each address a different aspect of the dissertation topic whilst never veering too far from the central argument. You should ensure that you provide sufficient evidential support, correctly referenced in the stipulated format; and it should be analyzed in detail.
    • The conclusion (this should summarise your argument, provide a synthesis of your thinking and give an indication of future research to be undertaken).
    • The bibliography (this should include a comprehensive list, possibly subdivided into primary and secondary sources, of all your reading for your dissertation; whether you have quoted from it in your dissertation or not).
    • Appendices (these are not always needed but if you have used them and referred to them in your dissertation then ensure they are logically structured and presented).
    • Read more in our comprehensive “How to Write a Dissertation” guide.

    What happens after I have completed my dissertation?

    An internal and an external examiner, appointed by the academic board, will examine the dissertation. In some cases (such as for a Ph.D.), you will then have to attend an oral examination known as a ‘viva’, short for “viva voce”, from the Latin for ‘with the living voice’, where you will be asked to defend your dissertation before your examiners and where, hopefully, you will be told you have been successful. The examiners can decide one of the following:

    • To award the degree outright to the candidate.
    • To award the degree with revisions, which will need to be approved before the degree is finally awarded to the candidate.
    • To award a lesser degree (a Master’s, if the examination is for a Doctorate).
    • To award a lesser degree to the candidate after approved revisions.
    • To fail the candidate (this is quite rare, because usually a supervisor will advise you to rewrite your dissertation until it is of the required standard).
    What is a Dissertations Meaning and Definition? Image by Paweł Englender from Pixabay.
  • Respect and Rule of Majority for Minority Rights

    Respect and Rule of Majority for Minority Rights

    What is the Rule of Majority? with Majority Respect for Minority Rights Essay; Democracy is a form of government of the people which is ruled by the people. Democracies understand the importance of protecting the rights, cultural identities, social practices, and religious practices of all individuals. For the people’s will to govern, a system of majority rule with respect for minority rights has been put into place.

    Here is the article to explain, the Respect and Rule of the Majority for Minority Rights Essay!

    Majority rule is a way of organizing government where citizens freely make political decisions by voting for representatives. The representatives with the most votes then represent the will of the people through majority rule. Minority rights are rights that are guaranteed to everyone, even if they are not part of the majority. These rights cannot be eliminated by a majority vote. Minorities must trust that the majority will keep in mind the wishes of the minority when making decisions that affect everyone. The minority of today will not necessarily be the minority of tomorrow.

    The concept of majority rule and respect for minority rights is demonstrated in several places in the US Constitution. The first three Articles of the Constitution identify how the people will elect representatives into Congress and how those elected officials will then appoint officials into the judicial and executive branches, thus giving direct and indirect representation to the majority. The Articles also identify the duties of the three separate branches of the government: the legislative, executive, and judicial branches. While each branch has its own duties, they are dependent on each other.

    The legislative branch must create a law. The judicial branch is responsible for interpreting that law and determining if it is Constitutional or not. The executive branch can veto the law, which then sends the legislative branch back to the drawing board. The above example not only shows how each branch is separate but related, it also shows how the different branches act as a check and balance system for one another. It is through the checks and balances system that the framers ensured that each branch would be fair and efficient.

    Constitution;

    The US Constitution also demonstrates majority rule and respect for minority rights through Article V, which explains that the Constitution can be amended in two ways. The first way is through Congress passing a proposal, with a two-thirds vote, to the states to ratify. The amendment is ratified when approved by three-fourths of the states. The other way is through a national convention, where two-thirds of the states petition Congress to propose amendments. The proposal still has to receive a three-fourths vote by the states. This Article allows the people to make changes to the Constitution over time as the majority and minority positions change.

    Two other places where the Constitution addresses majority rule and minority rights are Article VI of the Constitution and the First Amendment. Article VI ensures that the Constitution, federal laws, and treaties take precedence over state laws. This Article binds all judges to abide by the same principles in court, and it ensured that the majority rule of the nation trumped the majority will of the individual states. The First Amendment gives all citizens basic rights. It is through these rights that the minority stays protected. The right to free speech and the right to assemble allow the minority to be heard, which allows them to grow and become the majority.

    Instances;

    There are several instances in which the concept of majority rule concerning minority rights has played a significant role in American government and policy. One example is the case of Plessy v. Ferguson (1896). In this case, Homer Plessy, a man who appeared white but was one-eighth black, was arrested in Louisiana for sitting in the white railroad car and refusing to move to the black railroad car (Zimmerman, 1997).

    According to Louisiana law, all persons with a black bloodline, regardless of how small, were to be considered black and had to be segregated from white people. After being released from prison, Plessy took his case to the US Supreme Court. The court decided that there could be segregation as long as it was of equal standard. This case demonstrates the will of the majority to allow for segregation while nominally protecting the minority by requiring “equal standards”.

    Other instances;

    The next example of majority rule concerning minority rights challenged the Plessy v. Ferguson decision in the case of Brown v. Board of Education of Topeka, KS (1954). This historic case dismantled the segregation that was allowed under the Plessy decision. The Brown case involved 13 minority parents and their children, who were denied access to a school closer to their home because of segregation laws. The case showed that separate schools were not equal, and that the segregation laws were a violation of the Equal Protection Clause.

    This case demonstrates that minorities do have a voice, and shows the majority taking on its responsibility to also protect and serve the minority. The case was also the catalyst for social change in the United States in the treatment of non-whites; this social change was the beginning of the minority becoming the majority. Majority rule with respect for minority rights is vital to a democratic government. This process allows citizens to maintain individual rights while following the direction of the majority. It also allows citizens to change the laws as society, the majorities, and the minorities change.

    Respect and Rule of Majority for Minority Rights; Image by Succo from Pixabay.
  • Judicial System Definition Differences Constitutional Law Essay

    Judicial System Definition Differences Constitutional Law Essay

    Constitutional Law for Judicial System Definition Differences Essay; In different countries, there are various forms of judicial systems, and each of them has its own ways of governance. For example, in the US, the system is formed from two different court systems: the federal court system and the state court systems. Each of these systems has the responsibility of hearing specific types of cases. Neither system is completely independent of the other, because the systems often interact. Moreover, the resolution of legal issues and the vindication of legal rights are the main goals of all the court systems.

    Judicial System: How Definitions of Criminal Responsibility Differ Among Countries;

    The federal court system consists of two types of court. The first type is referred to as the Article III courts. These courts include the District Courts, the Circuit Courts of Appeals, and the Supreme Court. They also involve two special courts, such as the Court of Claims and the international trade courts. The latter courts are distinctive because, unlike the other courts, they are courts of general jurisdiction. A court of general jurisdiction can hear most types of cases.

    There is also a second type of court in various countries, which may involve the magistrate courts, bankruptcy courts, the Court of Military Appeals, the tax courts, and the Court of Veterans’ Appeals. In the U.S. there are special Article III courts, which involve the Court of Claims and the Court of International Trade. Other courts formed by Congress are the magistrate judges, the bankruptcy courts, the Tax Court, and the Court of Veterans’ Appeals.

    No two state court systems are exactly alike. However, there are various similarities that resemble the typical state court judicial system. Most of the court systems are composed of two types of trial courts; trial courts of limited jurisdiction include the family and traffic courts.

    More details;

    There are also the courts of general jurisdiction, which involve the main trial-level courts, the intermediate appellate courts, and the highest state courts. In contrast to the federal judges, many of the state court judges do not hold lifetime appointments; they are either appointed or elected for a specific number of years.

    Trial courts of limited jurisdiction deal with certain specific types of cases; they are usually located in or near the county seat and are often presided over by a single judge. The judge, sitting without a jury, hears most of the cases in these courts. Some examples of trial courts of limited jurisdiction are the municipal court and the domestic relations court.

    Trial courts of general jurisdiction are the principal trial courts in the state judicial system. They hear cases outside the jurisdiction of the trial courts of limited jurisdiction, including both criminal and civil cases. As in many countries, most of the states in the U.S. have intermediate appellate courts between the trial courts of general jurisdiction and the highest court in the state. All the states have some kind of highest court. Some are referred to as the highest court, whereas others are known as supreme courts.

    Common Tradition, Civil Tradition, Socialist Tradition, Muslim Tradition;

    Common Tradition;

    The common law tradition is the judicial system that prevailed in England and in other countries colonized by England. The name derives from the medieval theory in which the law was administered by the king’s courts, which represented the common custom of the realm, as opposed to the custom of local jurisdictions applied in manorial and local courts. The common law in its early development was the product of three English courts (King’s Bench, the Court of Common Pleas, and the Exchequer) which competed successfully against the other courts of jurisdiction and established a distinct body of doctrine.

    Civil Tradition;

    Civil law is the system inspired by Roman law; its fundamental feature is that the laws are written into a compilation and are not determined by judges. It is, conceptually, the group of legal systems and ideas that originated from the code of the Emperor Justinian, although they were overlaid by Germanic, feudal, ecclesiastical, and local practices, as well as by doctrinal strains such as natural law, legislative positivism, and codification. The principle of civil law is to supply all citizens with a reliable, written collection of the laws that pertain to them and that the judges must follow. The civil law system is the oldest and most widespread surviving legal system in the world.

    Socialist Tradition;

    The socialist tradition is the political philosophy that encompasses many theories of economic organization based on direct or public worker ownership and administration of the means of production and the allocation of resources. Socialists typically shared the view that a market economy unjustly concentrated wealth and power in the small section of society that controlled the capital and derived its wealth through a system of exploitation. That, in turn, created an unequal society that failed to offer everyone an equal chance to maximize their potential.

    Muslim Tradition;

    Within the Muslim tradition, a great deal of confusion, contestation, and disunity has been brought about by the careless use of the argument that such things never existed in the days of the Prophet and the rightly guided caliphs, or that they were not permitted by Islamic law. When loudspeakers were initially used in India to amplify the sound of the adhan, some opposed them on the grounds that they were untraditional. Members in some Asian countries opposed such practices, since most of these systems were established later by major shaikhs such as Abdul Qadir Jilani.

    Public and Private Law;

    The legal terms public law and private law may seem complicated to ordinary people, which is why there is confusion in legal procedures. Public law is the theory of law that governs the relationship between the state and the individual, where the individual is considered to be either a company or a citizen. Public law consists of three sub-divisions: criminal, administrative, and constitutional law. Constitutional law concerns the various organs of the state, such as the legislature, the judiciary, and the executive, whereas administrative law governs international trade, taxation, manufacturing, and the like. Criminal law involves state-imposed sanctions on individuals or companies in order to maintain social order and justice.

    Private law;

    Private law is also known as civil law and involves relationships between private individuals, and between citizens and companies. It covers the law of obligations and the law of torts, which are defined in two ways. Firstly, the law of obligations regulates and organizes the legal relations between individuals under a contract. Secondly, the law of torts remedies and addresses issues of civil wrongs that do not arise from any contractual duty. Private law is distinguished from public law in that public law involves the state; a private law is also the name for a bill that is enacted into law and targets particular companies and people, in contrast to public law, which has a wider scope and influence on the general public.

    The Variations in How Courts are Organized;

    The organization of the courts of law in various countries involves the Supreme Court, the District Courts, the Magistrates’ Courts, the National Labor Court, and the Regional Labor Courts. The Magistrates’ Courts are the primary trial courts and have jurisdiction in criminal matters in which the defendants are charged with an offense. The District Courts are the lower courts that deal with matters not within the sole jurisdiction of other courts, whereas the Supreme Court has jurisdiction to hear civil and criminal appeals from the District Courts.

    Judicial System Definition Differences Constitutional Law Essay; Image by Succo from Pixabay.
  • Checks and Balances within the US Constitution Essay

    Checks and Balances within the US Constitution Essay

    Constitutional Law for Checks and Balances within the US Constitution Essay; There is a system within the US Constitution that was created specifically to regulate the amount of power each branch of the government has; this system is called Checks and Balances, and it is vital to our government. Without a system to prevent one branch of the government from having more power than another, the government would be controlled by one group of people, and it would not be fair to the people of the US if one branch had more power than another. This system is designed to prevent tyranny.

    Here is the article to explain, Checks and Balances within the US Constitution Constitutional Law Essay!

    The three branches of the government are the legislative branch, the executive branch, and the judicial branch. The legislative branch is run by Congress, which includes the House of Representatives and the Senate; the main responsibility of the legislative branch is to create laws. The executive branch is run by the President of the US. The president enforces laws and proposes new ones, is in command of the military, and has the power of veto. The judicial branch is run by the Supreme Court. The power the judicial branch has is to interpret the Constitution and review laws.

    The Separation of Powers was designed by the framers of the Constitution. This system serves several goals. Separation prevents the concentration of power in one authority, which is the main cause of tyranny. It also permits each of the branches to have power over the other two branches. The United States of America was the first nation to have a separation of powers among the branches of government. The powers and responsibilities are equally divided amongst the executive branch, the legislative branch, and the judicial branch. Dividing the US government into three separate branches excludes the possibility of any one of the groups holding total power. The separation of powers also created the checks and balances system, which does not allow any one of the branches of government to have more power than another; the main goal is to maintain equality within the government.

    Essay Part 01;

    The system of Checks and Balances plays a vital role in the US government; this system was built so that no branch of the government can ever have too much power, because each branch of the government is controlled by the other two branches. Each branch of the government checks the power of the other branches to make sure that every branch has equal power. The people of the US place their trust in the government and want their rights to be protected. If each branch operated by itself, it would not be fair or constitutional.

    The way laws are created is an example of Checks and Balances. The legislative branch first proposes a bill. Then the bill is voted on by Congress and sent to the executive branch. The president then decides whether or not the bill will improve our country. If the president believes the bill is a good idea, he or she will sign the bill, and it becomes law; however, if the president does not think that the bill is good for the country, he can veto the bill. Another check the legislative branch can use, if they believe that this particular bill should become law, is to override the president’s veto. The bill is sent back to the legislative branch, and if two-thirds of the group agree, this overrides the president’s veto and the bill becomes a law.

    Essay Part 02;

    Now, once the bill has become a law, the people of the US can test the new law in the courts, which are run by the judicial branch. A person can file a lawsuit if they believe a law is not constitutional; it is the judicial branch’s job to listen to every side of the story and determine whether or not the law is constitutional. All three branches of the government are involved in the law-making and enforcement process. If responsibility for the laws lay solely in the hands of one branch, it would not be constitutional. The system of checks and balances permits every branch of the government to have a say in how the laws are created.

    The legislative branch makes the laws. It also runs the following checks on the executive branch. The legislature can remove the president from office; this can only be done if Congress believes the president is not doing his or her job properly, a process known as impeachment. The legislature also holds "the power of the purse," which means it controls how money is spent in the government. If a president wants money to go to war or for other federal action, the legislature will not provide the money unless it believes the action is constitutional. Another power the legislature holds over the executive branch is that the Senate approves presidential appointments and treaties. As with its checks on the executive branch, the legislature also has power over the judiciary: it can impeach judges and approve the appointment of judges.

    Essay Part 03;

    The executive branch's main goal is to carry out the laws. The most important power the executive branch has over the others is the power to veto. The executive branch also has the power to call important sessions of Congress, and the president can propose new ideas for legislation. The power the executive branch holds over the judiciary is that the president appoints the Supreme Court justices and other federal judges.

    The judicial branch also runs checks on the other branches of government. The judges of the judicial branch serve for life and are not controlled by the executive branch. A check the judiciary holds over the executive is judicial review: the court can determine whether or not an action taken by a member of the executive branch is unconstitutional. The courts can likewise decide whether an act of the legislature is constitutional.

    Judicial review is the power the judicial branch has over the legislative and executive branches to review a law or treaty and determine whether or not it is constitutional. The Marbury v. Madison case is what established the Supreme Court's power of judicial review. I feel that judicial review is extremely important because, if it were not in the Constitution, there could be laws that are unconstitutional yet still in effect. There could be many mistakes in the laws of our government that the judiciary can look over and determine should be thrown out or revised so that they are fair.

    Essay Part 04;

    If the government did not have this system, the various branches would not be able to work together to maintain a stable government. If one branch of government had total control, or more control over another branch, it would not be constitutional. The US Constitution is based on the people's rights and on equality across America. There would be many problems without a system of separation of powers. There would be no way to work out what role each government official plays in our lives; with this system, however, the government is divided into different branches that each control different aspects of government.

    The system of checks and balances keeps these three branches of government in cooperation. It permits each branch to run checks on the other two to make sure that power is shared equally among the three. I feel this is a good idea, letting each branch check the others. If the branches of government checked themselves, they would most likely be far more biased; since someone from outside their branch is the one to examine their powers, I feel it is much fairer.

    Essay Part 05;

    The government is one of the most important aspects of our lives. The government does its job in the best interest of the nation, and the people of the United States place their trust in it to guard their rights. The system of checks and balances has worked very well throughout US history; although there have been some problems, it has improved the government a great deal. It rarely happens that an appointed official is rejected or a veto is overridden, but it does happen.

    The system of checks and balances and the separation of powers are supposed to keep the three branches of government in balance. Even though there are times when one branch appears to have more power than another, overall the three branches together form a balanced system in which no one branch can hold all power over the government. The goal of the separation of powers and checks and balances is to develop a system that is equal and fair to all citizens of the United States.

    Constitutional Law for Checks and Balances in the US Constitution Essay; Image by jessica45 from Pixabay.
  • Taoism vs Buddhism Differences or Distinction Religion Essay

    Taoism vs Buddhism Differences or Distinction Religion Essay

    The distinction or differences between Taoism vs Buddhism Religion Essay; Taoism originated in China, and many believe it started in the sixth century B.C., whereas Buddhism is said to have originated in the 500s B.C. in India. Both Taoism and Buddhism are great philosophical traditions and religions with long histories that have strongly influenced and shaped Chinese culture and values.

    Here is the article to explain, What is the Religion Essay of distinction or differences between Taoism Vs Buddhism?

    These two religions have some similarities; they are even considered one kind in Malaysian culture. Both Taoism and Buddhism believe in reincarnation, which means life after death, and both have similar ultimate goals. However, they are very different in their beliefs, practices, and perspectives on individual life, society, values, culture, the environment, and even the universe.

    Different objective principles;

    Taoism and Buddhism have different objective principles; different views and beliefs about life after death, widely known as reincarnation; different ways and solutions for coping with and solving the problems of life; and different perspectives and practices regarding marriage.

    The word Tao in Taoism means "the way" or "the path" in Chinese. The objective of Taoism is to achieve Tao, which means to attain the right path in life; by doing so, one can become immortal. Besides that, Tao is sometimes also considered the origin of everything, which already existed and guided the whole world and everything in it to fulfill their roles before the universe was formed.

    Taoism focuses more on personal or individual philosophy, because it concentrates on how to achieve Tao, harmony, and balance within oneself; it does not urge people to find ways to help and improve the community or society, since every individual should do so by himself or herself. It also says that everything in the world is simple, correct, and good; life becomes complex only because human beings choose to live a complex life.

    On the other hand, in Buddhism's beliefs, life is suffering, which contrasts with Taoism's belief that life is all about goodness. Buddhists believe that illness and suffering are part of the nature of life that we cannot escape: birth, growing old, getting sick, and death form the natural cycle of life.

    Eightfold Path;

    According to Buddhism, the only way to put the suffering of life to an end is to understand the four noble truths of life and practice the noble eightfold path: right knowledge or understanding, right intention, right speech, right behavior or action, right livelihood, right effort, right mindfulness, and right concentration (Buddhist Temples).

    The Eight Noble Paths;
    1. The first path is right knowledge, which refers to the correct understanding of what life is about, or the understanding of the four noble truths of life.
    2. The second path is right intention, meaning the right will: to abstain from lusts, to gain immunity from negative emotions such as hate and anger, and to be innocuous, that is, not violent or aggressive.
    3. The third path is right speech, which means not speaking bad or harmful words and being aware of what we are saying by choosing the right words and the right tone.
    4. The fourth path is right behavior or right action, which means acting correctly and for the right reasons; this includes the five main rules of Buddhism: not to kill, steal, consume alcoholic drinks, or commit sexual misconduct, and to be honest.
    5. The fifth path is right livelihood, which means gaining or earning money and wealth legally and morally.
    6. The sixth path is right effort, which involves practicing the right will and controlling self-serving desire and craving.
    7. The seventh path is right mindfulness, which means being aware and having the ability to see things without being affected by other people or the environment.
    8. The eighth path is right concentration, which refers to the mental force of focusing on the ultimate goal of Buddhism; this involves practicing meditation, clearing the mind, and developing the right concentration.

    Beliefs;

    Both Taoism and Buddhism believe in life after death, known as reincarnation. They believe that the life cycle has no beginning and no end; life, death, and rebirth are perceived as a continuous cycle, and death is not the end of life (Valea E., n.d.). But the two have different explanations of and perspectives on reincarnation.

    According to Taoism, the soul or spirit never dies; it shifts to another body, reborn as another person, and this repeats until it attains the Tao. It is said that everyone has an inner light that can guide us back to a clean and clear mind and pull us away from distractions and lusts; Tao can only be obtained by following this inner light. Taoism also believes that the soul can travel through space and time, and that it becomes immortal once Tao is achieved.

    In Buddhist belief, by contrast, samsara, the wheel of rebirth, and the sufferings of life come to an end only when one achieves Nirvana, the highest or final state of the life cycle, and becomes immortal. Buddhism's other belief about reincarnation differs from Taoism's as well: Taoism holds that rebirth is the transformation of a soul from one human body to another, whereas in Buddhism the cycle of birth, death, and rebirth is governed by good and evil behavior, which divides transmigration into three different stages and leads souls to be transformed into different forms accordingly.

    Rules and practices;

    Those who act entirely against the rules and practices of Buddhism are sent to hell; this is the first stage of transmigration. In the second stage, those who did something considered somewhat evil are transmigrated into animal forms, yet their spirits become more human-like, or are reborn as humans again, after successive turns of transmigration.

    The third stage involves the spirit becoming chaste by putting down ego and lust, the change from the sensual to the non-sensual; it also consists of going through many phases of spiritual transformation and rebirth before finally reaching Nirvana, the ultimate goal of Buddhism. The stage of reincarnation is determined by one's actions: past actions decide the current life, and present actions decide the future life. Buddhists believe that one's behavior follows from the mind and one's thoughts, not from fate; therefore everyone deserves what comes as the result of what he or she did.

    Perspectives and beliefs;

    Beyond these perspectives and beliefs, Buddhism and Taoism also differ in their ways of handling and solving problems in life, such as health problems. According to Taoism, everything in this world has its own natural order, and the first step in handling problems is to understand nature; the Yin-Yang concept is the core of this principle. The Yin-Yang concept states that reality is binary: the whole is formed from the combination of two opposite elements, and by balancing these two opposites.

    In addition, conquering the defects of the soul by attaining balance within oneself leads to conducting the mental or cosmic energy known as Chi through the body, which is believed to help heal illness or sickness. The Tai Chi exercise was created on the basis of the Yin-Yang concept to help circulate and balance the Chi in the body and maintain health, because Taoists believe that illness is caused by an imbalance or blockage of Chi in the body's circulation.

    Problems source;

    Meanwhile, in Buddhist belief, problems in life such as illness and sickness are a part of life and hence should be accepted as its nature. Buddhism calls for finding the source of problems; meditation is the Buddhist practice that guides people to find focus, peace, and calm within themselves, and that focus, peace, and calm help to identify the origin of problems and guide good actions to overcome them. At the same time, unlike Taoists, who heal illness by balancing opposite elements and conducting Chi, Buddhists seek medication. However, because of Buddhist beliefs about life, herbal medications extracted and purified from plants are the only medications Buddhists use.

    Relationships and marriage;

    Buddhism and Taoism also differ in how they look at relationships and marriage. According to Buddhist belief, marriage is not a necessary event in one's life, so there is no special ceremony or practice for getting married. Besides that, sexual activity is only accepted socially and ethically when it takes place within a marriage; it is not accepted outside a marriage relationship.

    Buddhism teaches that in a marriage, both husband and wife need to possess four important qualities to be well-matched and maintain a good marriage: faith, virtue, generosity, and wisdom. Faith requires understanding between husband and wife; it is understanding each other that builds up trust, honor, and faith, and faith is the main key that leads to the development of virtue, generosity, and wisdom.

    According to Buddhist belief, satisfaction of the five senses and reproduction are the two main purposes of marriage, because it is said that no figure, sound, smell, savor, or touch can attract a man more than a woman, and the same goes for a woman. Besides that, reproduction is important to society because of family obligations: children are the ones responsible for taking care of and supporting their parents and for protecting and continuing the family's unique customs.

    On the other hand;

    Taoism believes that woman represents Yin and man represents Yang, and that the Tao, the path to harmony, is achieved when a woman and a man enter a relationship and commit together as one: the Yin chi is accepted by the man and the Yang chi is received by the woman, and the two combine into one and are balanced. Some people relate the word Tao to marriage by saying that marriage is the Tao to the future, the way or path toward it, because life after marriage is like a new life, and through marriage babies are born: babies are the hope and creation of the future.

    At the same time, since Taoism emphasizes the balance and harmony of nature, it also lays stress on harmony in relationships between people, especially between husband and wife. Thus, husband and wife should avoid confrontations and serious conflicts. Confrontations and conflicts can be prevented through calm, love, caring, respect, acceptance, humility, communication, emotional control, self-awareness, self-reflection, sacrifice, and mutual support and understanding.

    On the whole, Taoism and Buddhism are religions that guide people in how to live a good life and teach the important values of life. The two religions have some similarities, and sometimes these similarities may even lead people to mistake them for the same religion, or to confuse the beliefs and practices of Buddhism with those of Taoism.

    Death and life cycle;

    There are similar beliefs between the two religions, such as the belief in life after death and a never-ending life cycle, and Taoism and Buddhism also share a similar ultimate goal. But their objective principles; their understanding, beliefs, and interpretation of life after death; their perspectives and methods for dealing with problems, especially health problems; and their points of view and practices in relationships and marriage are very different. Taoism and Buddhism each have their own unique way of thinking about and interpreting life.

    Taoism vs Buddhism Differences or Distinction Religion Essay; Image by Sasin Tipchai from Pixabay.
  • Biometric Authentication Methods Information Technology Essay

    Biometric Authentication Methods Information Technology Essay

    Biometric Authentication Methods: Introduction, Robustness, Types, Future and Scope in an Information Technology Essay. The world is advancing with technology, and as technology advances, security needs to advance too, and hence will play a crucial role. When we think about information security, authentication plays a crucial role in it. Numerous systems make use of biometric authentication methods, such as tablets, mobile phones, and laptops. The authentication may be biometric, using our fingerprints, facial recognition, an iris scan, or other physiological parameters.

    Here is the article to explain Biometric Authentication Methods: Robustness, Types, Future and Scope in an Information Technology Essay!

    In this article, we provide a brief introduction to biometrics, the types of biometrics, their robustness, and the future and scope of biometrics.

    Introduction to Biometric Authentication;

    The assurance of confidentiality, integrity, and availability is the primary concern when we think about information security. When we talk about security, authentication plays a crucial role, and so biometrics comes into play. What are biometric authentication methods? A biometric may be any physiological parameter that can be used to authenticate and establish a one-to-one correspondence between an individual and a piece of data. Biometrics provides an added layer of confidence and security for authentication: mobile phones use fingerprints or facial recognition to unlock, and some security doors may use an iris scan to let an individual enter.

    “According to a recent Ping identity survey, 92% of enterprises rank biometrics as an effective to a very effective way to secure identity for the data stored”.

    All biometrics work in a similar manner, involving a scanner, a computer, and software. The scanner scans the physiological feature, detects the required parameter, and sends it to the computer. The computer runs sophisticated software, often based on pattern matching, which generates a code. That code is first taken as input during enrollment and later used for authentication. Usually, multiple samples are taken to improve accuracy.
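    As a rough illustration of this generic enroll-then-verify flow, here is a minimal, hypothetical sketch. It is not any real biometric algorithm: the feature values, the threshold, and the function names are invented for illustration. Several scans are averaged into a stored template, and verification accepts a fresh scan whose distance from the template falls under a threshold.

```python
def make_template(samples):
    """Average several scans of the same trait into one stored template;
    the text notes that multiple samples are usually taken."""
    n = len(samples)
    return [sum(vals) / n for vals in zip(*samples)]

def match_score(template, scan):
    """Mean absolute difference between template and a new scan;
    lower means more similar."""
    return sum(abs(t - s) for t, s in zip(template, scan)) / len(template)

def verify(template, scan, threshold=0.1):
    """Accept the scan if it is close enough to the stored template."""
    return match_score(template, scan) <= threshold

# Enrollment from three noisy scans of the same (invented) trait:
db = {"alice": make_template([[0.8, 0.1], [0.9, 0.2], [1.0, 0.3]])}
print(verify(db["alice"], [0.9, 0.2]))   # genuine attempt: True
print(verify(db["alice"], [0.1, 0.9]))   # impostor attempt: False
```

    Real systems replace the averaged feature list with modality-specific templates and far more sophisticated matchers, but the enroll/score/threshold structure is the same.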

    Robustness;

    Robustness is the property of being strong and healthy in constitution. Transposed onto a system, it refers to the ability to tolerate perturbations that might affect the system's functioning. In the same vein, robustness can be defined as "the ability of a system to resist change without adapting its initial stable configuration." "Robustness in the small" refers to situations where perturbations are small in magnitude, with the caveat that the "small" magnitude hypothesis can be difficult to verify, because "small" or "large" depends on the specific problem. Conversely, "robustness in the large" refers to situations where no assumptions can be made about the magnitude of perturbations, which can be either small or large. It has been argued that robustness has two dimensions: resistance and avoidance.

    Face Biometric Authentication in Information Technology Essay; Image by teguhjati pras from Pixabay.

    Factors of Robustness;

    To define the factors of robustness, consider three inputs: a sample input (input 1), a correct input that matches the sample input (input 2), and a wrong input that does not match the sample input (input 3).

    • False Accept Rate (FAR): The probability that the system claims a successful match between the sample input (input 1) and the wrong input (input 3).
    • False Reject Rate (FRR): The probability that the system claims an unsuccessful match between the sample input (input 1) and the correct input (input 2).
    • Relative Operating Characteristics (ROC): A graph plotting FRR against FAR, showing the trade-off between the two error rates.
    • Equal Error Rate (EER): The rate at which FAR equals FRR. The ROC shows clearly how FAR and FRR change; the lower the EER, the more accurate the system.
    • Failure to Enroll Rate (FER): The percentage of data that fails to enroll into the system.
    • Failure to Capture Rate (FTC): The percentage of attempts in which the system fails to detect a biometric characteristic.
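    To make the relationship between FAR, FRR, and EER concrete, here is a small, hypothetical sketch (the score values are invented): it treats match scores as similarities, sweeps a decision threshold, and reports the threshold where FAR and FRR come closest, approximating the EER.

```python
def far_frr(genuine, impostor, threshold):
    """FAR and FRR at one threshold; scores are similarities, accept if >= threshold."""
    far = sum(s >= threshold for s in impostor) / len(impostor)  # impostors accepted
    frr = sum(s < threshold for s in genuine) / len(genuine)     # genuine users rejected
    return far, frr

def equal_error_rate(genuine, impostor):
    """Approximate EER: the threshold where |FAR - FRR| is smallest."""
    candidates = sorted(set(genuine) | set(impostor))
    best = min(candidates,
               key=lambda t: abs(far_frr(genuine, impostor, t)[0]
                                 - far_frr(genuine, impostor, t)[1]))
    far, frr = far_frr(genuine, impostor, best)
    return best, (far + frr) / 2

# Invented similarity scores for five genuine and five impostor attempts:
genuine = [0.9, 0.8, 0.85, 0.7, 0.95]
impostor = [0.3, 0.4, 0.75, 0.6, 0.5]
threshold, eer = equal_error_rate(genuine, impostor)
print(threshold, eer)  # 0.75 0.2
```

    Raising the threshold lowers FAR but raises FRR, and vice versa; the EER is the single number that summarizes that trade-off.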
    Results of Robustness of each authentication;

    The following were the results of the various biometric authentication methods using the above parameters.

    Part 01;
    • Fingerprints: The scanner may fail to detect an impression correctly because of moisture between the finger and the sensor.
    • Iris Scan: A false match of the iris is virtually impossible because of its distinct properties. The iris is closely associated with the human brain and is said to be one of the first parts of the body to disintegrate after death.
    • Retina Scan: The main drawback of the retina scan is its intrusiveness. The method of obtaining a retina scan is personally invasive: laser light must be directed through the cornea of the eye. Also, interaction with a retina scanner is not convenient; an adept operator is required, and the person being scanned has to follow his or her directions.
    • Palm Vein Recognition: A constraint on its use is that the hand must be placed accurately; guide markings have to be incorporated, and units must be seated so that they are at a comfortable height for most users.
    • Ear Recognition: This method has not yet achieved an exceptional level of security. It is simple, and the recognizable features of the ear cannot provide a strong establishment of individual identity.
    Part 02;
    • Voice Recognition: Even though this method does not require any specialized or expensive hardware and can be used over a phone line, background noise causes a significant problem that shrinks its accuracy.
    • Facial Recognition: The accuracy of this method is improving with technology, but it is not yet very impressive. Current software may not locate the face as a "face" at the appropriate place, which can worsen the result. This technology can run into problems with identical twins or with significant changes in hair or beard style.
    • Signatures: A person does not make a signature consistently the same way, so the data obtained from a person's signature has to allow for quite some variability. Most signature-dynamics systems verify the dynamics only; they pay no attention to the resulting signature.
    • DNA: The environment and handling can affect measurements. The systems are not precise, require integration or further hardware, and cannot be reset once compromised.

    Types of Biometric Authentication Methods;

    There are many types of biometric authentication methods, including fingerprints, physiological recognition, signatures, and DNA.

    Fingerprints;

    Digital fingerprint biometrics are a transition from the old traditional method of fingerprint authentication, in which we were required to make a fingerprint impression using colored ink on a document that was later sent to a fingerprint scanner and used for authentication. At present it works digitally: a scanner uses a light-sensitive microchip to produce an image and sends it to the computer. The computer uses sophisticated pattern-matching software to generate a code, which is first used as enrollment input and later for authentication.

    Physiological recognition;

    The subsections below give a brief overview of the physiological characteristics most commonly used for the automated recognition of a particular person.

    Iris Scan;

    An iris scan depends on the patterns in the colored part of the eye, the iris. These patterns are very distinct and are obtained from a video-based acquisition system. Iris-scan biometrics works in a similar manner to other biometrics: a high-resolution grayscale camera takes an image of the eye from 10-40 cm away, which is then processed by a computer running sophisticated pattern-matching software that generates a code used for authentication.
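    Iris matching is commonly described in the literature as comparing binary "iris codes" by normalized Hamming distance. The toy sketch below assumes such codes have already been extracted from the grayscale image; the bit strings are invented, and the 0.32 threshold is a figure often quoted for Daugman-style iris codes.

```python
def hamming_distance(code_a, code_b):
    """Fraction of differing bits between two equal-length bit strings."""
    assert len(code_a) == len(code_b)
    return sum(a != b for a, b in zip(code_a, code_b)) / len(code_a)

def same_iris(code_a, code_b, threshold=0.32):
    """Two codes are declared the same iris when their normalized
    Hamming distance falls below the threshold."""
    return hamming_distance(code_a, code_b) < threshold

enrolled = "1011001110100101"
probe    = "1011001010100101"   # one flipped bit out of 16
print(hamming_distance(enrolled, probe))  # 0.0625
print(same_iris(enrolled, probe))         # True
```

    Because genuine re-scans of the same iris flip relatively few bits while different irises disagree on roughly half the bits, a single distance threshold separates the two cases well.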

    Retina Scan;

    A retina scan is very similar to an iris scan, and the whole process that applies to the iris scan applies here as well. The only difference is that while the image of the eye is taken, infrared light is passed into it, as the retina lies at the rear of the pupil. The camera captures the pattern of blood vessels behind the eye; these patterns are distinctive. The image thus obtained goes through sophisticated pattern-matching software, which generates a code used for authentication.

    Palm Vein Recognition;

    Palm vein recognition does not work on the palm by itself; rather, it depends on the geometry of the arrangement of the veins. Palm vein biometrics works in a similar manner to fingerprint and retina scans: the scanner uses infrared light and a microchip to detect vein patterns. The patterns thus obtained go through sophisticated pattern-matching software, which generates a code used for authentication.

    Ear Recognition;

    This recognition works in a similar manner to an iris scan. An ear has distinctive markings and patterns, which may be complex to understand. A high-resolution grayscale camera captures an image of the ear from 10-40 cm away. The image is then transferred to a computer running pattern-matching software, which generates a code used for authentication. Such software was first produced by the French company ART Techniques. This recognition is mainly used in law-enforcement applications such as crime scenes and is still being improved.

    Voice Recognition;

    Voice recognition does not depend on the pronunciation of speech itself; rather, it depends on the vocal tract, mouth, nasal cavities, and other speech-shaping organs of the human body. This biometric uses the acoustic features of speech, which are distinctive. The speech obtained from the recorder is transferred to the computer, which runs it through sophisticated pattern-matching software and generates a code used for authentication.

    Facial Recognition;

    Facial recognition does not depend on the face by itself; rather, it depends on distinctive facial features such as the positioning of the eyes, nose, and mouth and the distances between them. A high-resolution camera takes an image of the face, which is then resized to a pre-defined template that may range between 3-5 KB. The template thus obtained is transferred to the computer, which runs sophisticated pattern-matching software and generates the code.

    Signatures;

    Signature authentication does not depend on the signature itself but rather on the gesture made while signing. The gesture is measured by the pressure, direction, acceleration, and dimensions of the strokes. The most significant advantage of signatures is that they cannot be stolen by a fraudster just by looking at how a signature was previously written. The gesture information thus obtained runs through sophisticated pattern-matching software on a computer, which generates a code.

    DNA;

    DNA sampling requires a sample of blood, tissue, or other bodily material. This biometric is invasive at present and still has limits, as the analysis of DNA takes 15-20 minutes. DNA sampling cannot be matched in real time with current technology; later, as technology advances, DNA sampling may become more significant.

    Future and Scope of biometric authentication methods;

    The following are approaches by which the issues with these biometric authentication methods could be resolved:

    Part 01;
    • Fingerprints: A fingernail plate can be used, which segregates features on the surface of the fingernail plate with more precision.
    • Iris Scan: Various papers have suggested further developments to improve the veracity of iris scanning for authentication, with a three-dimensional camera primarily preferred for this purpose.
    • Retina Scan: We can use a high-resolution sensor to capture more precise images of blood-vessel patterns.
    • Palm Vein Recognition: We can streamline the sensor device in order to reduce the overall cost of extracting the features of an individual's palm vein.
    • Ear Recognition: We can put extra effort into pattern recognition in order to increase its complexity.
    Part 02;
    • Voice Recognition: If we develop an excellent combination of artificial intelligence and current voice recognition, it will be a massive benefit for biometrics.
    • Facial Recognition: We can use a three-dimensional camera for data collection. We can also use more precise sensors to capture images of facial skin, looking for peculiar features such as visual spots, lines, or birthmarks.
    • Signatures: If we combine current digital signatures with other methods of verification, signatures too will have more potential to cut down fraud and identity theft by adding more layers of security to the biometric.
    • DNA: At the moment, a DNA test usually takes 15-20 minutes to perform. If we integrate the DNA analyzer and combine it with other biometric methods, it will become a very secure way to authenticate.

    Conclusion;

    Biometric authentication has excellent scope for private, public, and government agencies. Biometrics is the future of the security industry and is quickly becoming recognized as the most accurate identification technology in today's world. However, the current generation of biometrics is easy to beat when any one method is used alone. If we combine biometrics with new technology, or combine different biometrics, it will increase the accuracy of the current generation of biometrics. Biometric products will become more flexible and capable of serving different purposes, accomplishing more than just authentication.

    Biometric Authentication Methods Information Technology Essay; Image by ar130405 from Pixabay.
  • Process Reengineering Examples Meaning Definition Steps 2400

    Process Reengineering Examples Meaning Definition Steps 2400

    What does Process Reengineering mean, with Examples, Meaning, Definition, and the Steps involved; a 2400-word Essay. It isn’t just a change; it’s a dramatic change and a dramatic improvement, which can only be achieved by overhauling organizational structures, job descriptions, performance management, training, and, most significantly, the use of IT, i.e., Information Technology. It is the act of adjusting an organization’s major functions to increase efficiency, improve product quality, and/or decrease costs. It starts with an in-depth analysis of the business workflows and the identification of key areas that need improvement; those who do this kind of work are often referred to as PR specialists, hired by companies to facilitate transitions to more standardized processes.

    Here is the article to explain, Process Reengineering Examples, Meaning, Definition and Steps involved 2400 words Essay!

    Your agency is making first-rate progress. You’re meeting goals without problems, but how you meet those goals is where the problem lies. Business processes play a vital role in driving goals; however, they’re not as efficient as you’d like them to be. Making changes to a process gets increasingly hard as your business grows, because of ingrained habits and investments in old techniques. But in reality, you can’t improve processes without making changes.

    Process benchmarking: processes ought to be reengineered carefully, because experiments and errors bring in a lot of confusion. In process reengineering, companies start with a blank sheet of paper and rethink existing processes to deliver more value to the customer. They usually adopt a new value system that places increased emphasis on customer needs. Companies reduce organizational layers and remove unproductive activities in two key areas. First, they redesign functional departments into cross-functional teams. Second, they use technology to improve information dissemination and decision-making.

    It’s important to differentiate this from business process improvement, which focuses on simply updating a company’s current processes. PR, however, aims to make fundamental changes to the whole scope of a business’s systems. PR specialists exist in all sorts of industries, so their specific everyday duties will vary from job to job. Below, you’ll read about process reengineering examples, their history, meaning and definition, and also the steps involved.

    History of Reengineering;

    The concept of reengineering started in the 1990s, when the Massachusetts Institute of Technology (MIT) conducted research entitled “Management in the 1990s”. The sole purpose of that research was to understand the role played by information technology in organizations during that time. Since then, a lot of research has been done on reengineering, and different authors have different views on it, which has raised a lot of controversy and disagreement among authors.

    There are some, like Drucker, who believed that inputs from new and innovative concepts should be used to optimize the productivity of all operations; while others believed that reengineering is a misconception and would soon disappear. In the 1880s, Frederick Taylor suggested that process reengineering could be used by managers to discover the best process (way) to perform the work, thereby increasing the productivity of the whole operation.

    In the early 1900s, Henri Fayol originated the concept of reengineering and explained it as a way to derive optimum advantage from all available resources by finding the best process to perform the work. In the time of Taylor and Fayol, technology was a constraint, and it was very difficult for large companies to design processes in a cross-functional or cross-departmental manner.

    Meaning and Definition of Reengineering;

    The most popular meaning and definition of reengineering is;

    “the fundamental rethinking and radical redesign of business processes to achieve dramatic improvements in critical, contemporary measures of performance, such as cost, quality, service, and speed.”

    This definition of reengineering includes four essential points, which can be summarized as follows:

    Fundamental Rethinking:

    While doing reengineering, business officials must ask themselves basic questions about their business, like “What is our business?”, “What do we want to do in our business?”, and “What do we want to change?” Asking these questions brings clarity about the business operations and forces people to look at the tacit rules their organization follows for doing business. Reengineering works in two steps. First, it determines what the company must do to improve; second, how it has to do it. Reengineering accepts nothing as it is; it ignores what is and concentrates on what should be.

    Radical Redesign:

    Radical redesign means getting into the details of things, not making superficial adjustments to what is already in place, but getting to the roots of things and looking for new, innovative, and efficient ways to do the same thing more productively.

    Dramatic improvements:

    It is often said that business reengineering is about business reinvention, not about business enhancement, business improvement, or business modification. Hence, reengineering is not about making small improvements but about making big, efficient, and noticeable changes to achieve quantum leaps in performance. Marginal improvement requires only the fine-tuning of operations; reengineering should be brought in only when there is a need for drastic improvements, which involves replacing the old with the new.

    Processes:

    The word “process” is central to reengineering, and it gives a hard time to most managers of organizations, because most managers are job-oriented rather than process-oriented. Job-oriented managers focus mainly on the job (task) at hand rather than the process involved in the job. Business processes are the collection of activities that take in inputs of different kinds and create output that is of value to the client or customer of the organization. Reengineering not only focuses on the different departments of the organization but also on the organization as a whole, because of which reengineering sees the full picture of the work moving from one department of the organization to another, keeping an eye on the operational hindrances along the way.

    Steps involved in Process Reengineering;

    Process reengineering methodology mainly includes the following steps:

    Planning for reengineering:

    Planning and preparation play a vital role in any process or event being successful, and the same applies to reengineering. Since reengineering involves major changes, not small improvements, and may also involve heavy costs, there should be a dire need for it. This step starts with the consensus of the firm’s executives on the process of reengineering. During reengineering, the processes are reengineered in such a way that they actively work in tandem with the mission and vision statements of the firm. Understanding customer expectations is most important, because the processes need to be re-engineered in a way that leads to the maximization of customer satisfaction.

    Mapping and Analysing As-Is Process:

    Before reengineering any process, the reengineering team should know the existing process. The underlying aim of business process reengineering is to bring about change drastically; process reengineering is not for small and slow changes. Many people advocate that it should be a “To-Be” analysis instead of an “As-Is” analysis. The usefulness of this step lies in identifying anything that prevents the process from achieving the desired results, in particular information transfer between organizations or people, and in identifying value-adding processes. It is implemented by using different models for the creation and documentation of activity and process models. Then, with the use of activity-based costing, the amount of time and the amount of cost consumed by each activity are calculated.

    Designing To-Be process:

    This phase starts with looking for alternatives to the current situation that align well with the strategic goals of the organization. The first part of this phase begins with benchmarking, which is a comparison of the firm with other firms in the same industry. It is general practice to select industry leaders for the comparison so that the firm can adopt their best practices. It is not necessary to select the firm for comparison from the same industry; one can choose any firm from any industry with similar processes.

    For example, both textile mills and food processing industries use Reverse Osmosis technology (process) for water purification; hence their water treatment processes can be compared. Next, we do an activity-based costing analysis for analyzing the time and costs involved in the different processes. Once the ABC analysis is done, To-Be models are prepared using different modeling techniques. It is important to know that this modeling is an iterative process, and different To-Be models are prepared for the analysis. At last, we make a trade-off matrix to select the best To-Be scenario.
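    The activity-based costing step above can be sketched in a few lines. This is a minimal illustration, not a full ABC methodology: the activity names, hours, and hourly rates are entirely hypothetical, and a real analysis would also allocate overhead by cost drivers.

    ```python
    # Illustrative activity-based costing (ABC) comparison of an As-Is process
    # against a candidate To-Be process. All figures are hypothetical.

    def process_cost(activities):
        """Total cost = sum of (hours consumed * hourly rate) per activity."""
        return sum(hours * rate for _name, hours, rate in activities)

    as_is = [
        ("manual order entry", 4.0, 30.0),
        ("duplicate data checks", 2.0, 30.0),
        ("paper-based approval", 3.0, 45.0),
    ]
    to_be = [
        ("online order entry", 1.0, 30.0),
        ("automated validation", 0.5, 30.0),
        ("electronic approval", 1.0, 45.0),
    ]

    print(process_cost(as_is))  # 315.0
    print(process_cost(to_be))  # 90.0
    ```

    Computing the same figures for several candidate To-Be models is what feeds the trade-off matrix used to select the best scenario.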

    Implementing Reengineered Process:

    The implementation phase is the phase where reengineering encounters maximum resistance. This is because the environment is not readily changeable, and hence it is the most difficult phase of all the phases in reengineering. As the firm invests a lot of time and incurs heavy expenses in the planning phase, it is justifiable to invest in training programs for the firm’s employees to support the cultural change. Winning the hearts of all the employees and motivating them is crucial for process reengineering.

    The next step is to make a transition plan to move from the As-Is to the redesigned process. The plan should be chosen in a fashion that goes well with the long-term strategy of the firm. Implementation of information technology that supports reengineering is a must for the process. The total amount of work that needs to be done for the reengineering is broken down into different components using work breakdown structure techniques, and these individual components are then worked on.

    Improve Process Continuously:

    The last but most important phase of any reengineering process is continuous monitoring of the processes and of the results that come from the modified/improved processes. If the actual results deviate from those expected, the deviations should be taken care of immediately. The performance of reengineering is measured by the competitive advantage the firm gains from it, the degree of employee satisfaction, and the amount of commitment management shows. The following examples show why we need process reengineering in business.

    Process Reengineering Examples:

    The ascendant popularity of business process reengineering since the 1990s means there are many business process reengineering examples. The most illustrative and frequently cited model is that of Ford Motor Company. In the 1990s, Ford began to use business process reengineering to make itself more competitive against global competitors such as Toyota, Honda, and Mazda.

    In comparing operations to their more efficient Japanese competitors, Ford noticed they were employing a hugely outsized number of people in their accounts payable division: 500 in comparison to Mazda’s five. Ford used business process reengineering to understand and solve the problem of this overstaffing.

    Ford found that every time their purchasing department wrote a purchase order, a series of processes was triggered that required accounts payable to do not one but three things: process the order from the purchasing department, process the copy of the order sent by the material control department, and process the copy of the receipt sent by the vendor. All of this took place before the accounts payable clerk could match the three documents and finally issue a payment. As part of the business process reengineering, Ford used digital technology to redesign the process and eliminate the inefficiency.

    Ford process reengineering examples;

    In the 1980s, many expanding American companies were looking for ways to cut down on administrative and overhead costs, and Ford was no different. When Ford started looking for things that could be improved in the organization, they spotted that their accounts payable department employed 500 people.

    When Ford looked at their smaller competitor, Mazda, they were astounded to find that its accounts payable department consisted of just 5 workers. This meant that, accounting for the difference in company size, if Ford implemented similar technology, it could reduce the number of workers to about 100.

    To understand how they could make the department more efficient, Ford analyzed the old process:

    • Once the purchasing department wrote a purchase order, they had to send a copy to accounts payable manually.
    • Then, the person responsible for resource allocation would receive the goods and send a copy of the related document to accounts payable.
    • Afterward, the vendor would send a receipt for the goods to accounts payable.
    • The old process involved 3 distinct human interactions that required approvals, which had to be obtained manually.

    Ford decided to implement the innovative (at that time) strategy of using computer software and databases to store and transfer information automatically. When done digitally, accounts payable processing becomes quicker and requires fewer workers. The redesigned process worked like this:

    • Purchasing office issues an order and inputs it into an online database.
    • The resource manager receives the goods and checks if the order matches the information in the database.
    • If there’s a match, material control accepts the order on the computer.
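    The redesigned, database-driven matching above can be sketched as a tiny program. This is a hypothetical illustration of the idea, not Ford’s actual system: the order numbers, field names, and in-memory “database” are all invented for the example.

    ```python
    # Sketch of a database-driven purchase-order match: purchasing enters the
    # order once, and receipt of goods is checked against that single record.

    orders = {}  # stands in for the shared "online database", keyed by PO number

    def issue_purchase_order(po_number, item, quantity):
        """Purchasing inputs the order directly into the database."""
        orders[po_number] = {"item": item, "quantity": quantity, "status": "open"}

    def receive_goods(po_number, item, quantity):
        """The resource manager checks the delivery against the database.
        Payment is authorized only when the goods match the recorded order."""
        order = orders.get(po_number)
        if order and order["item"] == item and order["quantity"] == quantity:
            order["status"] = "accepted"  # material control accepts on the computer
            return "payment authorized"
        return "mismatch: goods held for review"

    issue_purchase_order("PO-1001", "steel sheet", 50)
    print(receive_goods("PO-1001", "steel sheet", 50))  # payment authorized
    print(receive_goods("PO-1001", "steel sheet", 40))  # mismatch: goods held for review
    ```

    The point of the redesign is visible in the code: there is one record to check instead of three paper documents to reconcile, so no manual three-way match is needed.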

    The company selling commemorative cards example;

    In a company that offers products such as Christmas, anniversary, and commemorative cards, constantly renewing the stock and changing the design of the cards is fundamental. On average, it takes three months for new items to reach the shelves. Through market research, it’s possible to see that there would ideally be new products every month.

    At first glance, it’s easy to say that the delay is at the production stage. When analyzing and mapping the process, however, it’s verified that the creation stage is the most time-consuming. Oftentimes the creative team receives the concept and several employees begin to perform the same task (duplicate actions), or an idea takes days to get off the paper. With this information, we can redesign the process completely, defining a cross-functional team for concept and creation, with incredible results in speed, costs, and effectiveness.

    Cereal products Examples;

    The process of transforming food into cereal products begins on the farm with the harvest. This is followed by primary processing, packing, and transportation to the processing plants (depending on the grain). One large company analyzed its process and discovered a serious logistical problem: it lost almost 20% of the grains harvested during transportation from the farms to the factories, located near the biggest consumption centers, due to the precariousness of the roads.

    After a study, this business process reengineering case concluded that it would be more profitable to move the factories nearer to the farms and afterward transport the final products to the large centers, with much smaller losses. The old factory sheds were transformed into distribution centers, helping to reduce the impact of the initial investment, since they already had docks and other ready-made logistics infrastructure.

    Process Reengineering Examples, Meaning, Definition and Steps involved 2400 words Essay; Image by Mohamed Hassan from Pixabay.
  • Duty of Care Law English and Irish Approaches 2000 Essay

    Duty of Care Law English and Irish Approaches 2000 Essay

    Duty of Care Law: the difference between English and Irish Approaches, a 2000-word Essay. The duty of care arises in the tort of negligence, a relatively recently emerged tort. The general principle is that you should not harm those people to whom you owe a duty of care by your acts or omissions. If you fall below the standard of care owed, you will be liable in negligence for your acts or omissions.

    Here is the article to explain, the difference between English and Irish Approaches in Duty of Care Law 2000 words Essay!

    The questions arise as to whom the duty is owed and, more significantly, as to the standard of the duty owed. In Ireland, a duty is generally owed to any person who can be classed as your neighbor, which involves issues of proximity, foreseeability, and policy considerations. Differences exist in Irish and English law in terms of who is owed a duty of care. As regards the standard owed, it is that of the “reasonable person”. The cornerstone of the duty of care principle was expounded in the “neighbor principle” of Lord Atkin in Donoghue v Stevenson [1932] AC 562.

    The case involved a woman who had suffered shock and gastroenteritis upon the consumption of a bottle of ginger ale. The shock and gastroenteritis resulted from a decomposed snail at the bottom of the bottle. The plaintiff had no action against the shop owner, as he had not been negligent in any way. The question was whether she could take an action against the manufacturer of the ginger ale. The court ruled in her favor, finding that a duty of care was owed to your ‘neighbor’. Lord Atkin stated that:

    “The rule that you are to love your neighbor becomes in law, you must not injure your neighbor; and the lawyer’s question, who is my neighbor? receives a restricted reply. You must take reasonable care to avoid acts or omissions which you can reasonably foresee would be likely to injure your neighbor. Who, then, in law, is my neighbor? The answer seems to be persons who are so closely and directly affected by my act that I ought reasonably to have them in contemplation as being so affected when I am directing my mind to the acts or omissions which are called in question.”

    The English Approach;

    This duty of care mentioned above was later endorsed in Anns v Merton London Borough Council [1978] AC 728. The facts of the case were that the plaintiffs were lessees of flats in Wimbledon. The borough of Merton had approved a set of plans to build a block of flats. Eight years after the building was complete and the flats were rented, the foundations started to deteriorate. The tenants brought an action against the council for the cost of the repairs. The plaintiffs sued the local authority because its predecessor’s inspectors had either not inspected the foundations or, if they had, had done so negligently. The House of Lords held that the local authority owed the plaintiffs a duty of care.

    It was in this case that Lord Wilberforce established a two-stage test:

    • First, one has to ask whether, as between the alleged wrongdoer and the person who has suffered damage, there is a sufficient relationship of proximity or neighborhood such that, in the reasonable contemplation of the former, carelessness on his part may be likely to cause damage to the latter, in which case a prima facie duty of care arises.
    • Secondly, if the first question is answered affirmatively, it is necessary to consider whether there are any considerations which ought to negative, or to reduce or limit, the scope of the duty or the class of person to whom it is owed or the damages to which a breach of it may give rise.
    More Approach;

    In subsequent cases in England, this ruling was initially approved, but it was rejected in Murphy v Brentwood District Council [1991] 1 AC 398, as it lacked precision and created a duty of care of general application. In that case, the defendant, Brentwood District Council, failed to inspect the foundations of a building adequately, with the result that the building became dangerously unstable. The claimant, being unable to raise the money for repairs, had to sell the house at a considerable loss, which he sought to recover from the district council.

    The plaintiff’s action failed, and it was held that the defendants did not owe a duty of care to the purchasers. As a result, in England, the law has developed certain categories of negligence, as suggested by Lord Bridge in Caparo Industries Plc v Dickman [1990] AC 605, where he stated that the law should be allowed to develop on an incremental basis rather than along the broad lines it had followed since Anns.

    Lord Bridge referred favorably to the decision of the Australian High Court in Sutherland Shire Council v Heyman, where Brennan J had suggested that:

    “It is preferable, in my view, that the law should develop novel categories of negligence incrementally and by analogy with established categories, rather than by a massive extension of a prima facie duty of care restrained only by indefinable considerations which ought to negative, or to reduce or limit, the scope of the duty and the class of person to whom it is owed.”

    Ultimately, the court, rejecting the earlier tests, laid down a three-step test requiring foreseeability, proximity, and that the imposition of a duty be “just and reasonable”. This third criterion essentially allows the courts to restrict the unfettered expansion of the duty of care to new situations.

    The Irish Approach;

    Until recently, the approach of both Donoghue and Anns was accepted by the Irish Courts, whose approach involved an examination of the issues of proximity and foreseeability, and of any policy considerations that would limit or negate the scope of the duty of care. In Ward v McMaster, Louth Co. Council and Nicholas Hardy & Co. Ltd. [1985] IR 29, it was held that the duty of care arose from the proximity of the parties and the foreseeability of the damage, balanced against the “absence of any compelling exemption based on public policy”.

    However, recent decisions of the Supreme Court, discussed below, indicate a retreat from this approach and an adoption of the English approach. In the abovementioned Ward case, the plaintiff had purchased a house with the aid of a local authority housing grant. He later learned that the house was severely substandard and structurally unsound. He subsequently brought an action against the builder, the local authority, and the valuer of the local authority.

    The local authority was required by law to value the house before issuing the housing grant. It did so, and its valuer found no defects. However, the valuer did not have any construction knowledge and was therefore not held liable. He was an auctioneer and had never put himself forward as competent to value the house. The local authority, however, was found to be negligent, as it had failed to engage a person competent to carry out the investigation. The local authority maintained that any duty it owed was not to the plaintiffs but to the public, whose rates and taxes funded the local authority.

    More Approach Part 01;

    The court rejected this, holding that there was proximity between the parties. It held that it was foreseeable that the plaintiff would rely on the local authority’s valuation. The fact that the plaintiff had applied for a housing grant was proof that he was not wealthy and would therefore have been unlikely to commission a separate valuation. In particular, the court held that the failure of the local authority to warn the plaintiff not to rely on its valuation was relevant in finding it liable.

    The builder was also found liable, following the development of the law since Donoghue v Stevenson. The Supreme Court ruled that the duty owed would be to avoid foreseeable harm, and also to avoid any financial harm that might arise from having to repair defects in the house. This ruling changed the common law position that a builder could not be liable in such a case. McCarthy J stated that the duty arose “from the proximity of the parties, the foreseeability of the damage and the absence of any compelling exemption based upon public policy”.

    More Approach Part 02;

    In McNamara v ESB [1975] IR 1, a young boy was injured when he broke into an ESB substation. The substation was surrounded by a fence which was being replaced by a wall. The accident occurred at a spot where there was wire meshing. There were easily reachable un-insulated conductors at the ESB station, and for this reason the ESB had placed barbed wire above the mesh fencing to prevent intruders from entering the site. The ESB also knew at the time that children were entering the substation.

    The temporary fence was severely criticized by both an architect and an engineer hired as experts by the plaintiff. The court found the ESB liable based on proximity and foreseeability. The court did consider the steps taken by the ESB to prevent entry and decided that they were unreasonable in the circumstances. The court also considered whether the children could be liable. It concluded that they were not, as they did not appreciate that there was a danger, and this danger had not been communicated to them.

    More Approach Part 03;

    The recent Supreme Court judgment in Glencar Exploration plc and Andaman Resources plc v Mayo County Council [2002] 1 I.R. 84 demonstrates a retreat from the traditional stance of the Irish courts, bringing Irish law into line with English law. This judgment was followed in Fletcher v Commissioners of Public Works in Ireland, Supreme Court, unreported, 21 February 2003. The plaintiffs in the Glencar case had been granted ten licenses by the Minister for Energy to explore for gold in the Westport area and had invested heavily in such mining over the 24 years between 1968 and 1992.

    In 1991, they set up a joint venture with an Australian company, Newcrest Mining Limited. However, this joint venture collapsed following the introduction of a mining ban by Mayo County Council in its 1992 draft county plan. The plaintiffs successfully challenged the mining ban in judicial review proceedings in the High Court. They subsequently sought to recover damages from Mayo County Council for breach of duty in an action before the High Court, which dismissed the claim. The reason for the dismissal was that, although Mayo County Council had been negligent in adopting the mining ban, according to Kelly J this negligence did not give rise to any right to damages.

    More Approach Part 04;

    The High Court decision was appealed to the Supreme Court, which again dismissed the action. Keane CJ dealt with the duty of care and the neighbor principle at length. He questioned whether the two-step test of Anns was the correct test to follow in this jurisdiction and reinterpreted the decision in the Ward case. He stated that:

    “There is, in my view, no reason why courts determining whether a duty of care arises should consider themselves obliged to hold that it does in every case where injury or damage to property was reasonably foreseeable and the notoriously difficult and elusive test of ‘proximity’ or ‘neighborhood’ can be said to have been met, unless very powerful public policy considerations dictate otherwise. It seems to me that no injustice will be done if they are required to take the further step of considering whether, in all the circumstances, it is just and reasonable that the law should impose a duty of a given scope on the defendant for the benefit of the plaintiff … ”

    The Glencar judgment adds a third step to the Anns two-step test: the question must be asked whether it is just and reasonable to impose a duty of care. Arguably, this may be no different from the policy considerations inherent in the two-step test; however, it adds a third hurdle for litigants to overcome. The Glencar judgment is in line with the approach favored by the English courts.

    Duty of Care Law difference between English and Irish Approaches 2000 words Essay; Image by LEANDRO AGUILAR from Pixabay.

    References; The duty of care. Retrieved from https://www.lawteacher.net/free-law-essays/tort-law/the-duty-of-care.php?vref=1