Tag: Information

  • Different Kind of Security Attacks on RFID Systems



    RFID systems are vulnerable to attack and can be compromised at various stages. Generally, attacks against an RFID system can be categorized into four major groups: attacks on authenticity, attacks on integrity, attacks on confidentiality, and attacks on availability. Besides being vulnerable to common attacks such as eavesdropping, man-in-the-middle, and denial of service, RFID technology is particularly susceptible to spoofing and power analysis attacks.

    Meaning of RFID: “Radio-frequency identification (RFID) uses electromagnetic fields to automatically identify and track tags attached to objects. The tags contain electronically stored information. Passive tags collect energy from a nearby RFID reader’s interrogating radio waves. Active tags have a local power source such as a battery and may operate at hundreds of meters from the RFID reader. Unlike a barcode, the tag need not be within the line of sight of the reader, so it may be embedded in the tracked object. RFID is one method for Automatic Identification and Data Capture (AIDC).”

    This section illustrates the different kinds of attacks on RFID systems.

    Eavesdropping: Since an RFID tag is a wireless device that emits a unique identifier upon interrogation by an RFID reader, there is a risk that the communication between tag and reader can be eavesdropped. Eavesdropping occurs when an attacker intercepts data with any compliant reader for the correct tag family and frequency while a tag is being read by an authorized RFID reader. Since most RFID systems use clear-text communication due to tag memory capacity or cost constraints, eavesdropping is a simple but efficient way for an attacker to obtain the collected tag data. The information picked up during the attack can have serious implications, as it may be used later in other attacks against the RFID system.

    Man-in-the-Middle Attack: Depending on the system configuration, a man-in-the-middle attack is possible while data is in transit from one component to another. An attacker can interrupt the communication path and manipulate the information passing back and forth between RFID components. This is a real-time threat: the attacker sees the information before the intended device receives it and can change it en route. Even if the system being attacked receives some invalid data, it might assume the problem was caused by network errors and not recognize that an attack occurred. RFID systems are particularly vulnerable to man-in-the-middle attacks because the tags are small in size and low in price, leaving little room for strong authentication.

    Denial of Service: Denial-of-service (DoS) attacks can take different forms, targeting the RFID tag, the network, or the back-end to defeat the system. The purpose is not to steal or modify information but to disable the RFID system so that it cannot be used. For DoS attacks on wireless networks, the first concern is physical-layer attacks such as jamming and interference. Jamming with noise signals can reduce the throughput of the network and ruin network connectivity, resulting in overall supply chain failure. A device that actively broadcasts radio signals can block and disrupt the operation of any nearby RFID readers. Interference from other radio transmitters is another way to prevent a reader from discovering and polling tags.

    Spoofing: In the context of RFID technology, spoofing is an activity whereby a forged tag masquerades as a valid tag and thereby gains an illegitimate advantage. Tag cloning is a kind of spoofing attack that captures the data from a valid tag, and then creates a copy of the captured sample with a blank tag.

    Replay Attack: In a replay attack, an attacker intercepts the communication between an RFID reader and a tag to capture a valid RFID signal. At a later time, this recorded signal is re-entered into the system when the attacker receives a query from the reader. Since the data appears valid, it will be accepted by the system.

    Virus: If an RFID tag is infected with a computer virus, this RFID virus could use SQL injection to attack the back-end servers and eventually bring the entire RFID system down.

    Power Analysis: Power analysis is a form of side-channel attack that attempts to recover passwords by analyzing changes in the power consumption of a device. It has been shown that power consumption patterns differ depending on whether the tag receives correct or incorrect password bits.
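
    To make the idea concrete, here is a toy simulation, not a real power trace, of why bit-by-bit password checking leaks. It assumes a hypothetical tag that compares the password one bit at a time and aborts on the first mismatch, so a correct guess makes the comparison run longer and draw measurably more power:

```python
# Toy model of a power-analysis side channel. The tag, the 8-bit password,
# and the "energy" units are all illustrative assumptions.

SECRET = [1, 0, 1, 1, 0, 1, 0, 0]  # hypothetical tag password

def power_trace(guess):
    """Simulated energy reading: one unit per bit comparison."""
    energy = 0
    for g, s in zip(guess, SECRET):
        energy += 1            # each comparison consumes power
        if g != s:
            return energy      # tag aborts on the first wrong bit
    return energy + 1          # full match: tag does an extra 'accept' step

def recover_password(n_bits):
    """Recover the password bit by bit from the power side channel."""
    guess = [0] * n_bits
    for i in range(n_bits):
        guess[i] = 0
        e0 = power_trace(guess)
        guess[i] = 1
        e1 = power_trace(guess)
        # the guess that lets the comparison run longer is the correct bit
        guess[i] = 1 if e1 > e0 else 0
    return guess

print(recover_password(8))  # recovers SECRET without trying all 256 values
```

    The attack needs only 2n measurements for an n-bit password instead of 2^n guesses, which is why real tags compare passwords in constant time.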

    Impersonation: An adversary can query a tag or a reader in an RFID system, and can use this to impersonate a target tag or a legitimate reader. When a target tag communicates with a legitimate reader, the adversary can collect the messages the tag sends to the reader. With these messages, the adversary builds a clone tag that stores the target tag’s information. When the legitimate reader sends a query, the clone tag can reply using the target tag’s information, and the legitimate reader may accept the clone as legitimate.

    Information Leakage: If RFID systems are used widely, users will carry various tagged objects. Some objects, such as expensive products or medicine, store quite personal and sensitive information that the user does not want anyone to know. When tagged objects receive a query from a reader, the tags emit their Electronic Product Code (EPC) without checking the legitimacy of the reader. Therefore, unless RFID systems are designed to protect tag information, a user’s information can be leaked to malicious readers without the user’s knowledge.

    Traceability: When a user carries particular tagged objects, an adversary can trace the user’s movements using the messages transmitted by the tags. Concretely, when a target tag transmits a response to a reader, an adversary can record the transmitted message and establish a link between the response and the target tag. Once the link is established, the adversary can follow the user’s movements and obtain the user’s location history.

    Tampering: The greatest threat to an RFID system is data tampering. The best-known data tampering attacks target control data, and the main defense against them is control-flow monitoring to achieve tamper evidence. However, tampering with other kinds of data, such as user identity data, configuration data, user input data, and decision-making data, is also dangerous. Some solutions have been proposed, such as a tamper-evident compiler and micro-architecture collaboration framework to detect memory tampering. A further threat is tampering with application data, which can cause mistakes in the production flow, denial of service, incoherence in the information system, and exposure to attacks. This kind of attack is especially dangerous for RFID systems, since one of the main RFID applications is automatic identification for real-time database updating.



  • The Different types of Data Mining Functionalities


    Data mining has an important place in today’s world. It has become an important research area because a huge amount of data is available in most applications. Data mining functionalities are used to specify the kind of patterns to be found in data mining tasks. These tasks can be classified into two categories: descriptive mining tasks, which characterize the general properties of the data in the database, and predictive mining tasks, which perform inference on the current data in order to make predictions.

    This huge amount of data must be processed to extract useful information and knowledge, since they are not explicit in the data. Data mining is the process of discovering interesting knowledge from large amounts of data. The kinds of patterns that can be discovered depend upon the data mining tasks employed. By and large, there are two types of data mining tasks: descriptive data mining tasks that describe the general properties of the existing data, and predictive data mining tasks that attempt to make predictions based on inference from the available data.

    The data mining functionalities and the variety of knowledge they discover are briefly presented in the following list:

    Characterization: Characterization is the summarization of the general features of objects in a target class, and produces what are called characteristic rules. The data relevant to a user-specified class are normally retrieved by a database query and run through a summarization module to extract the essence of the data at different levels of abstraction.

    For example, one may wish to characterize the customers of a store who regularly rent more than 30 movies a year. With concept hierarchies on the attributes describing the target class, the attribute-oriented induction method can be used to carry out data summarization. With a data cube containing a summarization of the data, simple OLAP operations fit the purpose of data characterization.

    Discrimination: Data discrimination produces what are called discriminant rules and is basically the comparison of the general features of objects between two classes referred to as the target class and the contrasting class.

    For example, one may wish to compare the general characteristics of the customers who rented more than 30 movies in the last year with those who rented far fewer. The techniques used for data discrimination are similar to those used for data characterization, with the exception that data discrimination results include comparative measures.

    Association analysis: Association analysis studies the frequency of items occurring together in transactional databases, and based on a threshold called support, identifies the frequent itemsets. Another threshold, confidence, which is the conditional probability that an item appears in a transaction when another item appears, is used to pinpoint association rules. This is commonly used for market basket analysis.

    For example, it could be useful for the manager to know which movies are often rented together, or whether there is a relationship between renting a certain type of movie and buying popcorn or pop. The discovered association rules are of the form P→Q [s, c], where P and Q are conjunctions of attribute-value pairs, s (support) is the probability that P and Q appear together in a transaction, and c (confidence) is the conditional probability that Q appears in a transaction when P is present. For example, RentType(X, “game”) ∧ Age(X, “13-19”) → Buys(X, “pop”) [s=2%, c=55%]. This rule indicates that 2% of the transactions considered are of customers aged between 13 and 19 who rent a game and buy pop, and that there is a certainty of 55% that teenage customers who rent a game also buy pop.
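
    The support and confidence measures above can be sketched in a few lines of Python. The transactions and item names here are illustrative, not real store data:

```python
# Minimal sketch of support and confidence for association rules, matching
# the P -> Q [s, c] notation. Transactions are modelled as sets of items.

transactions = [
    {"game", "pop"},
    {"game", "pop", "popcorn"},
    {"game"},
    {"movie", "popcorn"},
    {"movie", "pop"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(p, q):
    """Conditional probability that q appears when p appears."""
    return support(p | q) / support(p)

# Rule: {game} -> {pop}
print(support({"game", "pop"}))       # s = 2/5 = 0.4
print(confidence({"game"}, {"pop"}))  # c = 2/3, about 0.667
```

    A mining algorithm such as Apriori would enumerate all itemsets whose support exceeds a threshold before computing confidences; this sketch just shows what the two measures mean.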

    Classification: Classification is the organization of data into given classes. It uses given class labels to order the objects in the data collection. Classification approaches normally use a training set where all objects are already associated with known class labels. The classification algorithm learns from the training set and builds a model, which is then used to classify new objects.

    For example, after starting a credit policy, the manager of a store could analyze the customers’ behavior with respect to their credit, and label the customers who received credit with three possible labels: “safe”, “risky” and “very risky”. The classification analysis would generate a model that could be used to accept or reject credit requests in the future.
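
    A minimal sketch in the spirit of the credit example: learn from a training set of labeled customers, then classify a new one. The features (income, missed payments) and the 1-nearest-neighbour rule are illustrative assumptions, not the store’s actual model:

```python
import math

# Hypothetical training set:
# (annual income in $1000s, missed payments) -> class label
training_set = [
    ((80, 0), "safe"),
    ((60, 1), "safe"),
    ((40, 3), "risky"),
    ((25, 5), "very risky"),
    ((20, 7), "very risky"),
]

def classify(x):
    """Assign the label of the closest training example (1-nearest-neighbour)."""
    _, label = min(training_set, key=lambda pair: math.dist(pair[0], x))
    return label

print(classify((70, 0)))  # -> safe
print(classify((22, 6)))  # -> very risky
```

    Real systems would use decision trees, neural networks, or similar learners, but the workflow is the same: a model built from labeled examples assigns labels to new objects.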

    Prediction: Prediction has attracted considerable attention given the potential implications of successful forecasting in a business context. There are two major types of prediction: one can either try to predict unavailable data values or pending trends, or predict a class label for some data. The latter is tied to classification: once a classification model is built from a training set, the class label of an object can be foreseen from the attribute values of the object and the attribute values of the classes. Prediction, however, more often refers to the forecast of missing numerical values, or of increase/decrease trends in time-related data. The major idea is to use a large number of past values to estimate probable future values.
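
    The "use past values to estimate future values" idea can be sketched with a simple moving average; real systems would use regression or time-series models, and the rental figures below are made up:

```python
# Prediction sketch: forecast the next value as the mean of a window of
# recent past values (a simple moving average).

def forecast(history, window=3):
    """Predict the next value from the last `window` observations."""
    recent = history[-window:]
    return sum(recent) / len(recent)

monthly_rentals = [100, 110, 120, 130, 140, 150]
print(forecast(monthly_rentals))  # (130 + 140 + 150) / 3 = 140.0
```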

    Clustering: Similar to classification, clustering is the organization of data in classes. However, unlike classification, in clustering, class labels are unknown and it is up to the clustering algorithm to discover acceptable classes. Clustering is also called unsupervised classification because the classification is not dictated by given class labels. There are many clustering approaches all based on the principle of maximizing the similarity between objects in the same class (intra-class similarity) and minimizing the similarity between objects of different classes (inter-class similarity).
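
    A tiny k-means run illustrates unsupervised class discovery: no labels are given, and the algorithm groups points by maximizing intra-class similarity on its own. The points and starting centroids are arbitrary examples:

```python
# Minimal k-means sketch on 2-D points.

def kmeans(points, centroids, iterations=10):
    for _ in range(iterations):
        # assignment step: attach each point to its nearest centroid
        clusters = [[] for _ in centroids]
        for p in points:
            i = min(range(len(centroids)),
                    key=lambda j: (p[0] - centroids[j][0]) ** 2
                                + (p[1] - centroids[j][1]) ** 2)
            clusters[i].append(p)
        # update step: move each centroid to the mean of its cluster
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

points = [(1, 1), (1.5, 2), (1, 0.6), (8, 8), (9, 9), (8, 9.5)]
centroids, clusters = kmeans(points, centroids=[(0, 0), (10, 10)])
print(centroids)  # roughly [(1.17, 1.2), (8.33, 8.83)]
```

    The two discovered centroids sit near the two natural groups of points, even though the algorithm was never told which point belongs where.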

    Outlier analysis: Outliers are data elements that cannot be grouped in a given class or cluster. Also known as exceptions or surprises, they are often very important to identify. While outliers can be considered noise and discarded in some applications, they can reveal important knowledge in other domains, and thus can be very significant and their analysis valuable.
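
    One common way to flag such exceptions is a z-score test, marking elements far from the mean in standard-deviation units. The 2.0 threshold below is a rule of thumb, and the sensor-style readings are invented:

```python
import statistics

def outliers(data, threshold=2.0):
    """Return elements more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(data)
    sd = statistics.stdev(data)
    return [x for x in data if abs(x - mean) / sd > threshold]

readings = [10, 11, 9, 10, 12, 11, 10, 45]
print(outliers(readings))  # [45]
```

    Whether 45 is noise to be discarded or the most interesting value in the set, for example a fraudulent transaction, depends entirely on the application.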

    Evolution and deviation analysis: Evolution and deviation analysis pertain to the study of data that changes over time. Evolution analysis models evolutionary trends in data, which allows characterizing, comparing, classifying, or clustering time-related data. Deviation analysis, on the other hand, considers differences between measured values and expected values, and attempts to find the cause of the deviations from the anticipated values.

    It is common that users do not have a clear idea of the kind of patterns they can discover or need to discover from the data at hand. It is therefore important to have a versatile and inclusive data mining system that allows the discovery of different kinds of knowledge and at different levels of abstraction. This also makes interactivity an important attribute of a data mining system.


  • What is Data Mining?



    Data mining involves the use of sophisticated data analysis tools to discover previously unknown, valid patterns and relationships in large data sets. These tools can include statistical models, mathematical algorithms, and machine learning methods such as neural networks or decision trees. Consequently, data mining consists of more than collecting and managing data; it also includes analysis and prediction. The objective of data mining is to identify valid, novel, potentially useful, and understandable correlations and patterns in existing data. Finding useful patterns in data is known by different names (e.g., knowledge extraction, information discovery, information harvesting, data archaeology, and data pattern processing).

    The term “data mining” is primarily used by statisticians, database researchers, and the business communities. The term KDD (Knowledge Discovery in Databases) refers to the overall process of discovering useful knowledge from data, where data mining is a particular step in this process. The steps in the KDD process, such as data preparation, data selection, data cleaning, and proper interpretation of the results of the data mining process, ensure that useful knowledge is derived from the data. Data mining is an extension of traditional data analysis and statistical approaches as it incorporates analytical techniques drawn from various disciplines like AI, machine learning, OLAP, data visualization, etc.

    Data mining covers a variety of techniques to identify nuggets of information or decision-making knowledge in bodies of data, and to extract these in such a way that they can be put to use in areas such as decision support, prediction, forecasting, and estimation. The data is often voluminous but, as it stands, of low value, as no direct use can be made of it; it is the hidden information in the data that is really useful. Data mining encompasses a number of different technical approaches, such as clustering, data summarization, learning classification rules, finding dependency networks, analyzing changes, and detecting anomalies. Data mining is the analysis of data and the use of software techniques for finding patterns and regularities in sets of data. The computer is responsible for finding the patterns by identifying the underlying rules and features in the data. It is possible to ‘strike gold’ in unexpected places, as the data mining software extracts patterns not previously discernible, or so obvious that no one had noticed them before. In data mining, large volumes of data are sifted in an attempt to find something worthwhile.

    Data mining plays a leading role in every facet of business. It is one of the ways by which a company can gain a competitive advantage. Through the application of data mining, one can turn large volumes of data collected from various front-end systems, such as transaction processing systems, ERP, and operational CRM, into meaningful knowledge.

    “Data mining is the computing process of discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems. It is an interdisciplinary subfield of computer science. The overall goal of the data mining process is to extract information from a data set and transform it into an understandable structure for further use. Aside from the raw analysis step, it involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating. Data mining is the analysis step of the “knowledge discovery in databases” process, or KDD.”

    Data Mining History and Current Advances

    The process of digging through data to discover hidden connections and predict future trends has a long history. Sometimes referred to as “knowledge discovery in databases,” the term “data mining” wasn’t coined until the 1990s. But its foundation comprises three intertwined scientific disciplines: statistics (the numeric study of data relationships), artificial intelligence (human-like intelligence displayed by software and/or machines) and machine learning (algorithms that can learn from data to make predictions). What was old is new again, as data mining technology keeps evolving to keep pace with the limitless potential of big data and affordable computing power.

    Over the last decade, advances in processing power and speed have enabled us to move beyond manual, tedious and time-consuming practices to quick, easy and automated data analysis. The more complex the data sets collected, the more potential there is to uncover relevant insights. Retailers, banks, manufacturers, telecommunications providers and insurers, among others, are using data mining to discover relationships among everything from pricing, promotions and demographics to how the economy, risk, competition and social media are affecting their business models, revenues, operations and customer relationships.

    Who’s using it?

    Data mining is at the heart of analytics efforts across a variety of industries and disciplines.

    Communications: In an overloaded market where competition is tight, the answers are often within your consumer data. Multimedia and telecommunications companies can use analytic models to make sense of mountains of customer data, helping them predict customer behavior and offer highly targeted and relevant campaigns.

    Insurance: With analytic know-how, insurance companies can solve complex problems concerning fraud, compliance, risk management and customer attrition. Companies have used data mining techniques to price products more effectively across business lines and find new ways to offer competitive products to their existing customer base.

    Education: With unified, data-driven views of student progress, educators can predict student performance before they set foot in the classroom – and develop intervention strategies to keep them on course. Data mining helps educators access student data, predict achievement levels and pinpoint students or groups of students in need of extra attention.

    Manufacturing: Aligning supply plans with demand forecasts is essential, as is early detection of problems, quality assurance and investment in brand equity. Manufacturers can predict wear of production assets and anticipate maintenance, which can maximize uptime and keep the production line on schedule.

    Banking: Automated algorithms help banks understand their customer base as well as the billions of transactions at the heart of the financial system. Data mining helps financial services companies get a better view of market risks, detect fraud faster, manage regulatory compliance obligations and get optimal returns on their marketing investments.

    Retail: Large customer databases hold hidden insights that can help you improve customer relationships, optimize marketing campaigns and forecast sales. Through more accurate data models, retail companies can offer more targeted campaigns – and find the offer that makes the biggest impact on the customer.



  • How to Prepare for the CAT Exam in 4 or 5 Months?



    Going by the previous year’s schedule, CAT will be conducted in the first week of December. That leaves roughly 4 or 5 months, about 150 days, for preparation. CAT is the one-stop solution for students who dream of getting an MBA degree from one of the best management institutes in India. To top it, the CAT score is accepted not just by the IIMs but by many other good B-schools. Though CAT is not deemed a tough exam, it is tricky to clear.

    Students who have cleared the exam in the past claim that it is not tough but requires careful and strategic planning to clear. Here we will discuss the preparation strategy for students who begin preparing for the CAT exam now.


    According to many experts and students who have cleared CAT earlier, this is the right time to prepare for CAT exam.

    The first 2-3 months should be devoted to learning the basic concepts and brushing up on the fundamentals of the topics. During this time, it is good practice to pick up mock tests and previous years’ question papers and begin solving them. In the beginning, the frequency should be one to two mock tests a week. This way, not only will you be able to analyze where you stand in terms of knowledge, but you will also get to measure your progress as you go along with your preparation.

    Contrary to popular opinion, you do not need to study for 8-12 hours every day. Even a 4-hour study session is enough, provided you are focused and attentive toward what you study.

    You will also need to categorize the topics to be covered as most difficult, difficult, moderate, and easy. This will help you prepare in a directed way, honing your strong topics and working harder on the weak ones.

    Learn more than one method to solve a question. During the exam, knowing more than one way of solving a question will help you solve questions more accurately. Accuracy is one of the key factors in scoring a high percentile in the CAT exam.

    While preparing, make sure that you pay equal attention to all three sections. Sometimes students who are good in English pay less attention to the VARC (Verbal Ability and Reading Comprehension) section and end up scoring badly despite having a strong grasp of the language.

    Make a routine and schedule for studying and stick to it. Incorporating CAT preparation into your daily schedule will help you stick to your goal and deliver the desired end result.

    Toward the last phase of your preparation, do not start learning new topics. The last phase, especially the weeks leading up to exam day, should be devoted entirely to solving mock tests. During this time you should be solving at least two mocks a day.

    Many CAT toppers swear by mock tests and have said that solving them helped them get into the right frame of mind, so that solving CAT questions while sticking to the rules of the exam became second nature to them.



  • Requirement for CAT Application Eligibility



    CAT will be conducted by IIM Lucknow. While there is still time before the official notification is out and the real race begins, why not go through the eligibility requirements essential for appearing in the exam. In general terms, anyone with a graduate degree can appear for the CAT exam. However, there are certain other conditions which must be fulfilled, or else a candidate will be disqualified from appearing in the exam. There is also the important question of work experience and whether it is an essential requirement for CAT.

    While understanding the eligibility requirements is fairly easy, students are sometimes unsure about what counts as a degree equivalent to a graduate degree. Also, students are often under the impression that it is compulsory to have some work experience before they appear for the CAT exam. In this article, we will explain both the academic eligibility and the question of work experience.

    CAT Eligibility Criteria


    • A candidate applying for CAT exam must have a Bachelor’s degree in any discipline with 50% marks or equivalent CGPA.
    • The minimum percentage required for candidates belonging to Scheduled Caste (SC), Scheduled Tribe (ST), and Persons with Disability (PWD) category is 45% or equivalent CGPA.
    • The degree must have been obtained from a university incorporated by an Act of Parliament or a State Legislature in India, or an institution recognized by the UGC, or must possess an equivalent recognition from the MHRD, Government of India.
    • Candidates appearing in the final year of their qualifying examination can also appear for the CAT exam, on the condition that they produce a certificate issued by the Principal/Registrar of their university/institute stating that they have completed all the degree requirements at the time of their admission.
    • Candidates who have completed CA/CS/ICWA can also apply. The percentage requirement for these candidates will be the same as mentioned in the points above.
    • The candidate must hold a Bachelor’s Degree, with at least 50% marks or equivalent CGPA (45% in case of the candidates belonging to Scheduled Caste (SC), Scheduled Tribe (ST) and Persons with Disability (PWD)/Differently Able (DA) category) awarded by any of the Universities incorporated by an act of the central or state legislature in India or other educational institutions established by an act of Parliament or declared to be deemed as a University under Section 3 of the UGC Act, 1956, or possess an equivalent qualification recognized by the Ministry of HRD, Government of India.
    • The percentage of marks obtained by the candidate in the bachelor’s degree would be calculated based on the practice followed by the university/institution from where the candidate has obtained the degree. In case the candidates are awarded grades/CGPA instead of marks, the conversion of grades/CGPA to percentage of marks would be based on the procedure certified by the university/ institution from where they have obtained the bachelor’s degree. In case the university/ institution does not have any scheme for converting CGPA into equivalent marks, the equivalence would be established by dividing the candidate’s CGPA by the maximum possible CGPA and multiplying the result with 100.
    • Candidates appearing for the final year of a bachelor’s degree/equivalent qualification examination, and those who have completed the degree requirements and are awaiting results, can also apply. If selected, such candidates will be allowed to join the programme provisionally, only if they submit, by the stipulated date (typically in July or August), a certificate from the Principal/Registrar of their College/Institute stating that the candidate has completed all the requirements for obtaining the bachelor’s degree/equivalent qualification as of the date of issue of the certificate.
    • IIMs may verify eligibility at various stages of the selection process, the details of which are provided at the website www.iimcat.ac.in. Applicants should note that the mere fulfillment of minimum eligibility criteria will not ensure consideration for shortlisting by IIMs. Prospective candidates must maintain a valid and unique email account and a phone number throughout the selection process.
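
    The fallback CGPA-to-percentage rule quoted above (for universities with no conversion scheme of their own) is simple enough to express directly; the sample CGPA values are hypothetical:

```python
# Fallback conversion: divide the candidate's CGPA by the maximum possible
# CGPA and multiply the result by 100.

def cgpa_to_percentage(cgpa, max_cgpa):
    return cgpa / max_cgpa * 100

print(round(cgpa_to_percentage(7.2, 10), 2))  # 72.0
print(round(cgpa_to_percentage(3.4, 4), 2))   # 85.0
```

    Note that this rule applies only when the awarding university certifies no conversion procedure of its own.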

    List of Equivalent Qualifications


    1. Bachelor’s degree in Engineering/Technology (4 years after 10+2/Post B.Sc./Post Diploma), or B.E/B.Tech equivalent examinations of professional societies recognized by MHRD/UPSC/AICTE (e.g., AMIE by the Institution of Engineers (India), AMICE by the Institute of Civil Engineers (India)).

    2. Any Qualification recognized by Association of Indian Universities New Delhi, which is equivalent to a Bachelor’s Degree awarded by UGC recognized University/Institutions.

    3. For cases not covered above, an equivalency certificate must be obtained from the Association of Indian Universities, New Delhi.

    Reservations


    • As per Government of India requirements, 15% of the seats are reserved for Scheduled Caste (SC) and 7.5% for Scheduled Tribe (ST) candidates. 27% of the seats are reserved for Other Backward Classes candidates belonging to the “non-creamy” layer (NC-OBC).
    • For an updated central list of state – wise OBCs eligible for availing the benefit of reservation and information in respect of the creamy layer, visit the website http://www.ncbc.nic.in.
    • In the case of the NC-OBC category, the castes included in the Central List of NC-OBC (available at http://www.ncbc.nic.in) by the National Commission for Backward Classes, Government of India, as on the last day of registration, will be used. Any subsequent changes will not be effective for CAT.
    • As per the provision under Section 39 of the PWD Act, 1995, 3% of seats are reserved for Differently Abled (DA) candidates. The three categories of disability are: (1) low vision/blindness, (2) hearing impairment, and (3) locomotor disability/cerebral palsy. This provision is applicable if the candidate suffers from any of the listed disabilities to the extent of not less than 40%, as certified by a medical authority as prescribed and explained in the said Act.
    • Candidates belonging to categories for which seats are reserved should read the eligibility requirements carefully before applying. It should be noted that while it is the endeavor of the IIMs that candidates belonging to SC/ST/PWD/Non-Creamy OBC categories join the Programme in the proportions mandated by law, they must still meet the minimum eligibility criteria and a certain minimum level of performance in the admission process.
    • Candidates should carefully read the description of the admission process followed by each IIM on its respective website. No change in category will be entertained after the closure of the registration window; hence, applicants are advised to pay attention while registering.
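    The reservation percentages above translate into seat counts by simple proportion. As a rough sketch (the total of 400 seats and the nearest-seat rounding are assumptions for illustration; institutes apply their own mandated rounding rules):

```python
# Reservation shares from the Government of India requirements quoted above.
# The rounding policy below is an assumption; actual institutes follow
# their own mandated rules.
RESERVATION = {"SC": 0.15, "ST": 0.075, "NC-OBC": 0.27, "DA": 0.03}

def reserved_seats(total_seats):
    """Return reserved-seat counts per category, rounded to the nearest seat."""
    return {category: round(total_seats * share)
            for category, share in RESERVATION.items()}

print(reserved_seats(400))  # hypothetical batch of 400 seats
```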

    Note for SC/ST, NC-OBC, and DA Candidates


    • If you belong to SC or ST categories, your caste/tribe must be listed in the Government of India schedule. The caste certificate that you send to IIM should be in the Government approved format and should clearly state: (a) Name of your caste/tribe; (b) Whether you belong to Scheduled Caste or Scheduled Tribe; (c) District and the State or Union Territory of your ordinary residence; and (d) the appropriate Government of India schedule under which your caste/tribe is approved by it as Scheduled Caste or Scheduled Tribe.
    • A copy of the SC/ST and/or PWD (DA) certificate(s) must be uploaded at the time of the online CAT application. Failure to upload a copy of the caste/class certificate will result in the rejection of your CAT registration.
    • The SC/ST and/or PWD (DA) certificate(s) must be shown and a photocopy should be submitted at the time of interviews. Moreover, the certificate(s) must be submitted at the time of joining programs of any of the IIMs.
    • If you belong to the Non-Creamy Other Backward Classes (NC-OBC), you must produce the NC-OBC certificate duly signed by the competent authority and enclose its photocopy at the time of interviews. Moreover, the certificate must be submitted at the time of joining the programs of any of the IIMs. Failure to do so during the post-CAT selection process will result in you not being considered under the reserved category.

    Work Experience Requirement


    Prior work experience is not a compulsory requirement for candidates appearing in CAT. The application form asks for work-experience details, but they are not mandatory.

    However, during the final selection of candidates, work experience is given some weightage by the IIMs. The exact weightage given to work experience is not revealed by the IIMs, and it varies from IIM to IIM.

    The bottom line is that work experience does not come into play until the last stage of final selection, and even then it is not a deciding factor. IIMs are known to admit mixed batches comprising both fresh graduates and students with work experience.

    Requirement for CAT Application Eligibility


  • The Steps with Advantages and Disadvantages of Strategic Management


    Strategy: The word “strategy” derives from the Greek “stratēgos”: “stratos” (meaning army) and “ago” (meaning leading/moving). A strategy is an action that managers take to attain one or more of the organization’s goals. Strategy can also be defined as “a general direction set for the company and its various components to achieve a desired state in the future. Strategy results from the detailed strategic planning process”.

    Here you’ll read and learn about the steps with advantages and disadvantages of strategic management.

    A strategy is all about integrating organizational activities and utilizing and allocating scarce resources within the organizational environment to meet present objectives. While planning a strategy, it is essential to remember that decisions are not taken in a vacuum, and that any action taken by a firm is likely to be met by a reaction from those affected: competitors, customers, employees, or suppliers.

    Strategy can also be defined as knowledge of the goals, the uncertainty of events, and the need to take into consideration the likely or actual behavior of others. Strategy is the blueprint of decisions in an organization: it shows the organization’s objectives and goals, sets out the key policies and plans for achieving those goals, and defines the business the company is to carry on, the type of economic and human organization it wants to be, and the contribution it plans to make to its shareholders, customers, and society at large.

    Steps of Strategic Management:

    The strategic management plan has various facets, which are discussed here. Strategies are applied to ensure proper planning and the appropriate allocation of funds for accomplishing the goals of the company.

    Step 1: Formulation;

    Formulating strategies essentially involves the environment within which the company has to survive. Here, various important decisions are made to figure out how the company will respond to the competition, and an external environmental analysis is carried out. The political, economic, legal, and social aspects are assessed during the formulation of the strategies.

    1. Industry Environment:

    The strategic decision-maker examines the competitive environment. They try to assess the resources available to rivals and also rivals’ bargaining power with customers. It is also important to understand trends among suppliers and to check for any new threats from potential entrants to the industry.

    2. Internal Environment:

    Though most companies do not take care of the internal environment, it is among the most important factors when implementing strategic management fundamentals. You should have a clear SWOT analysis of your employees, processes, and resources.

    Step 2: Implementation;

    This is the next important step in strategic management: here the management has to decide how resources will be utilized to achieve the goals formulated by the company. The implementation phase also checks how the resources of the organization have been structured.

    Advantages of Strategic Management Process:

    The process of strategic management is a comprehensive collection of different types of continuous activities and processes used in the organization. Strategic management is a way to transform an existing static plan into a proper systematic process. Strategic management can bring about immediate changes in the organization. The following are a few advantages of strategic management;

    1. Making a better future;

    There is always a difference between reactive and proactive actions. When a company practices strategic management, it is always on the offensive rather than the defensive: you come out victorious in a competitive situation instead of being a victim of it. It is not possible to foresee every situation, but if you know that certain situations are likely, it is always better to keep your weapons ready to fight them.

    2. Identifying the directions;

    Strategic management essentially and clearly defines the goals and mission of the company. The main purpose of this management is to define realistic objectives and goals, in line with the vision of the company. Strategic management provides a base against which progress can be measured and, on the same basis, employees can be compensated.

    3. Better business decisions;

    It is important to understand the difference between a great idea and a good idea. If you have a proper and clear vision for your company, then having a mission and methods to achieve it is a good idea. It turns into a great idea when you decide which projects to invest your money in, how to invest your time, and how to utilize the time of your employees. Once you are clear about the project and the time you and each of your employees will have to allocate, you can focus your attention on financial and human resources.

    4. The longevity of the business;

    Times are changing fast, and dynamic changes happen every day. Industries worldwide are changing at a fast pace, so survival is difficult for companies that do not have a strong base in the industry. Strategic management ensures that the company has a firm standing in its industry, and experts also make sure the company is not merely surviving on luck or chance opportunity.

    Various studies suggest that companies which do not follow strategic management rarely survive for more than five years. This is why companies should focus strongly on the longevity of the business: without strategic management, a company cannot survive in the long run.

    5. Increasing market share and profitability;

    With the help of strategic management, it is possible to increase the market share and the profitability of the company. With a focused plan and strategic thinking, any business can explore better customer segments, products, and services, and understand the conditions of the market in which it operates. Strategic management skills help you approach the right target market, and experts can guide you toward better sales and marketing approaches. You can also build a better distribution network and take business decisions that ultimately result in profit.

    6. Avoiding competitive convergence;

    Most companies have become so used to focusing on competitors that they have started imitating their good practices. Competition has become so intense that it is difficult to tell the companies apart. With the help of strategic management, differentiation becomes possible: learn the best practices in your industry, but build a unique identity that sets you apart from your competitors.

    7. Financial advantages;

    Firms that follow the process of strategic management prove to be more profitable over time than companies that do not opt for strategic management decisions. Firms that use strategic management apply the right planning methods and have excellent control over their future; they budget properly for future projects, and hence these businesses continue in the industry for a long time.

    8. Non-financial advantages;

    Besides the financial benefits, companies using strategic management also enjoy various non-financial benefits. Experts note that firms which practice strategic management are always ready to counter external threats. They have a better understanding of competitors’ strengths and weaknesses and hence can withstand the competition. This paves the way for better performance and rewards for the company over time. The main feature of this management system is its capacity for problem prevention and problem-solving. It also helps bring discipline to the firm in all types of internal and external processes.

    Disadvantages of Strategic Management Process:

    The process of strategic management includes a set of long-term goals and objectives for the company; using this method helps the company face competition in a better manner and increase its capabilities. These are some of the benefits, but every coin has two sides, and the same is the case with strategic management. Here are some of the limitations of strategic management;

    1. Complex process;

    Strategic management involves various continuous processes that check all the major critical components: the internal and external environments, long-term and short-term goals, strategic control of the company’s resources, and, last but not least, the organizational structure. This is a lengthy process because a change in one component can affect all the other factors.

    Hence one must understand the issues affecting all the concerned factors. This generally takes time, and in the end the growth of the company is affected. Being a complex process, it calls for a lot of patience and time from the management to implement strategic management. Proper strategic management requires strong leadership and properly structured resources.

    2. Time-consuming process;

    To implement strategic management, the top management must spend quality time getting the process right. Managers have to spend a lot of time researching, preparing, and informing employees about the new approach. Such long-term, time-consuming training and orientation can hamper the regular activities of the company: day-to-day operations are negatively impacted, and in the long term this could affect the business adversely.

    For example, many issues require daily attention but are neglected because managers are busy researching the details of strategic management. If problems are not properly resolved on time, attrition can increase greatly. Besides this, employee performance will also go down, because employees are not getting the required resolution of their problems. This situation may lead the management to divert all its critical resources toward employee performance and motivation, sidelining the strategic management process.

    3. Tough implementation;

    “Strategic management” sounds grand, but it is a fact that implementing this management system is more difficult than other management techniques. The implementation process calls for perfect communication among employees and employers. Strategic management has to be implemented in such a way that employees remain fully attentive, participate actively, and are accountable for their work.

    This accountability applies not only to top management but to all employees across the hierarchy. Experts note that implementation is difficult because management has to continuously strive to make employees aware of the process and benefits of the system. For example, if a manager was involved in forming the strategy but not in the implementation process, then that manager will never feel accountable for the related processes in the company.

    4. Proper planning;

    A management system calls for perfect planning. You cannot just write things on paper and leave them; this calls for proper practical planning, and it is not possible for one person alone: it is a team effort. When such processes are to be implemented, you may need to sideline various regular decision-making activities, which can adversely affect the business in the long run.

    Short Review: 

    In recent years, most firms have understood the importance of strategic management: it plays a key role in the rise and fall of any company. In a nutshell, we can conclude that strategic management succeeds only if a company can provide dedicated resources and staff to formulate and implement the entire system. If strategic management is implemented thoroughly, there is no doubt that the company will survive all kinds of odds and competition and remain in the market for a long period.

    This is required in the present situation for all companies. It just calls for proper planning and the right people to implement it. You need to keep a regular check on all external and internal factors affecting your industry; besides this, check whether your financial resources are enough to expand your business. If you keep these things in mind, implementation will become easy and quick for any organization, irrespective of its size.

  • What are Benefits of Strategic Management?


    Strategic management essentially means the formulation and implementation of various strategies to achieve the goals of the company. This detailed initiative is taken by the top management; strategic decisions are made based on available resources and take into consideration the effects of the external and internal environment on those decisions.

    Here you’ll read and learn about the benefits of strategic management, with its advantages and disadvantages:

    There are many benefits of strategic management, including the identification, prioritization, and exploration of opportunities. For instance, newer products, newer markets, and newer forays into business lines are only possible if firms engage in strategic planning. Next, strategic management allows firms to take an objective view of their activities and do a cost-benefit analysis of whether the firm is profitable.

    To be clear, this does not mean financial benefits alone (which are discussed below), but also an assessment of profitability in the sense of evaluating whether the business is strategically aligned to its goals and priorities.

    The key point to note here is that strategic management allows a firm to orient itself to its market and consumers and ensure that it is actualizing the right strategy.

    1] Financial Benefits;

    Many studies have shown that firms which engage in strategic management are more profitable and successful than those without the benefit of strategic planning and strategic management.

    When firms engage in forward-looking planning and careful evaluation of their priorities, they gain control over their future, which is necessary in the fast-changing business landscape of the 21st century.

    It has been estimated that more than 100,000 businesses fail in the US every year, and most of these failures have to do with a lack of strategic focus and strategic direction. Further, high-performing firms tend to make more informed decisions because they have considered both the short-term and long-term consequences and have oriented their strategies accordingly. In contrast, firms that do not engage in meaningful strategic planning are often bogged down by internal problems and a lack of focus that leads to failure.

    2] Non-Financial Benefits:

    The section above discussed some of the tangible benefits of strategic management. Apart from these, firms that engage in strategic management are more aware of external threats, have an improved understanding of competitors’ strengths and weaknesses, and see increased employee productivity. They also show less resistance to change and a clearer understanding of the link between performance and rewards.

    The key aspect of strategic management is that the problem-solving and problem-preventing capabilities of firms are enhanced through it. Strategic management is essential as it helps firms rationalize and actualize change and communicate the need for change better to their employees. Finally, strategic management helps bring order and discipline to the activities of the firm, in both internal processes and external activities.

    3] Closing Thoughts;

    In recent years, virtually all firms have realized the importance of strategic management. However, the key difference between those who succeed and those who fail lies in how strategic management and strategic planning are carried out. Of course, there are still firms that do not engage in strategic planning, or where the planners do not receive support from management. These firms ought to realize the benefits of strategic management to ensure their longer-term viability and success in the marketplace.

    The Advantages of Strategic Management;

    The advantages are as follows;

    1] Discharges Board Responsibility;

    The first reason that most organizations state for having a strategic management process is that it discharges the responsibility of the Board of Directors.

    2] Forces An Objective Assessment;

    Strategic management provides a discipline that enables the board and senior management to take a step back from the day-to-day business to think about the future of the organization. Without this discipline, the organization can become solely consumed with working through the next issue or problem, without consideration of the larger picture.

    3] Provides a Framework For Decision-Making;

    The strategy provides a framework within which all staff can make day-to-day operational decisions and understand that those decisions are all moving the organization in a single direction. It is not possible (nor realistic or appropriate) for the board to know all the decisions the executive director will have to make, nor is it possible (nor realistic or practical) for the executive director to know all the decisions the staff will make.

    The strategy provides a vision of the future, confirms the purpose and values of an organization, sets objectives, clarifies threats and opportunities, determines methods to leverage strengths, and mitigate weaknesses (at a minimum). As such, it sets a framework and clear boundaries within which decisions can be made. Also, the cumulative effect of these decisions (which can add up to thousands over the year) can have a significant impact on the success of the organization. Providing a framework within which the executive director and staff can make these decisions helps them better focus their efforts on those things that will best support the organization’s success.

    4] Supports Understanding & Buy-In;

    Allowing board and staff participation in the strategic discussion enables them to better understand the direction, why that direction was chosen, and the associated benefits. For some people, simply knowing is enough; many people, however, need to understand before they will give their full support.

    5] Enables Measurement of Progress;

    A strategic management process forces an organization to set objectives and measures of success. Setting measures of success requires the organization first to determine what is critical to its ongoing success; it then forces the establishment of objectives and keeps these critical measures in front of the board and senior management.

    6] Provides an Organizational Perspective;

    Addressing operational issues rarely looks at the whole organization and the interrelatedness of its varying components. Strategic management takes an organizational perspective and looks at all the components and the interrelationship between those components to develop a strategy that is optimal for the whole organization and not a single component.

    The Disadvantages of Strategic Management;

    The disadvantages are as follows;

    1] The Future Doesn’t Unfold As Anticipated;

    One of the major criticisms of strategic management is that it requires the organization to anticipate the future environment in order to develop plans, and as we all know, predicting the future is not an easy undertaking. The belief is that if the future does not unfold as anticipated, it may invalidate the strategy taken. However, recent research conducted in the private sector has demonstrated that organizations that use the planning process achieve better performance than those that don’t plan, regardless of whether they achieved their intended objective. Also, there are a variety of approaches to strategic planning that are not as dependent upon predicting the future.

    2] It Can Be Expensive;

    There is no doubt that in the not-for-profit sector there are many organizations that cannot afford to hire an external consultant to help them develop their strategy. Today, however, many volunteers can help smaller organizations, and there are funding agencies that will support the cost of hiring external consultants to develop a strategy. Regardless, it is important to ensure that the implementation of a strategic management process is consistent with the needs of the organization, and that appropriate controls are implemented to allow the cost/benefit discussion to be undertaken before the process is implemented.

    3] Long Term Benefit vs. Immediate Results;

    Strategic management processes are designed to provide an organization with long-term benefits. If you are looking to the strategic management process to address an immediate crisis within your organization, it won’t do so. It always makes sense to address immediate crises before allocating resources (time, money, people, opportunity cost) to the strategic management process.

    4] Impedes Flexibility;

    When you undertake a strategic management process, it will result in the organization saying “no” to some of the opportunities that may be available. This inability to pursue all of the opportunities presented to an organization is sometimes frustrating. Also, some organizations develop a strategic management process that becomes excessively formal. Processes that become this “established” lack innovation and creativity and can stifle the ability of the organization to develop creative strategies. In this scenario, the strategic management process has become the very tool that inhibits the organization’s ability to change and adapt.

    A third way that flexibility can be impeded is through a well-executed alignment and integration of the strategy within the organization. An organization that is well-aligned with its strategy has addressed its structure, board, staffing, and performance and reward systems. This alignment ensures that the whole organization is pulling in the right direction, but can inhibit the organization’s adaptability. Again, there are a variety of newer approaches to strategy development used in the private sector (they haven’t been widely accepted in the not-for-profit sector yet); that build strategy and address the issues of organizational adaptability.

  • Validity


    What is Validity?


    The most crucial issue in test construction is validity. Whereas reliability addresses issues of consistency, validity assesses what the test is supposed to be accurate about. A test that is valid for clinical assessment should measure what it is intended to measure and should also produce information useful to clinicians. A psychological test cannot be said to be valid in any abstract or absolute sense; more practically, it must be valid in a particular context and for a specific group of people (Messick, 1995). Although a test can be reliable without being valid, the opposite is not true; a necessary prerequisite for validity is that the test must have achieved an adequate level of reliability. Thus, a valid test is one that accurately measures the variable it is intended to measure. For example, a test comprising questions about a person’s musical preference might erroneously state that it is a test of creativity. The test might be reliable in the sense that if it is given to the same person on different occasions, it produces similar results each time. However, it would not be valid, in that an investigation might indicate it does not correlate with other, more valid measurements of creativity.

    Establishing the validity of a test can be extremely difficult, primarily because psychological variables are usually abstract concepts such as intelligence, anxiety, and personality. These concepts have no tangible reality, so their existence must be inferred through indirect means. In addition, conceptualization and research on constructs undergo change over time requiring that test validation go through continual refinement (G. Smith & McCarthy, 1995). In constructing a test, a test designer must follow two necessary, initial steps. First, the construct must be theoretically evaluated and described; second, specific operations (test questions) must be developed to measure it (S. Haynes et al., 1995). Even when the designer has followed these steps closely and conscientiously, it is sometimes difficult to determine what the test really measures. For example, IQ tests are good predictors of academic success, but many researchers question whether they adequately measure the concept of intelligence as it is theoretically described. Another hypothetical test that, based on its item content, might seem to measure what is described as musical aptitude may in reality be highly correlated with verbal abilities. Thus, it may be more a measure of verbal abilities than of musical aptitude.

    Any estimate of validity is concerned with the relationship between the test and some external, independently observed event. The Standards for Educational and Psychological Testing (American Educational Research Association [AERA], American Psychological Association [APA], & National Council on Measurement in Education [NCME], 1999; G. Morgan, Gliner, & Harmon, 2001) list the three main methods of establishing validity as content-related, criterion-related, and construct-related.

    Content Validity


    During the initial construction phase of any test, the developers must first be concerned with its content validity. This refers to the representativeness and relevance of the assessment instrument to the construct being measured. During the initial item selection, the constructors must carefully consider the skills or knowledge area of the variable they would like to measure. The items are then generated based on this conceptualization of the variable. At some point, it might be decided that the item content over-represents, under-represents, or excludes specific areas, and alterations in the items might be made accordingly. If experts on subject matter are used to determine the items, the number of these experts and their qualifications should be included in the test manual. The instructions they received and the extent of agreement between judges should also be provided. A good test covers not only the subject matter being measured, but also additional variables. For example, factual knowledge may be one criterion, but the application of that knowledge and the ability to analyze data are also important. Thus, a test with high content validity must cover all major aspects of the content area and must do so in the correct proportion.
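    The “extent of agreement between judges” mentioned above can be quantified. One common convention (not specified in this text, so treat the rating scale and its cutoff here as assumptions) is the item-level content validity index (I-CVI): the proportion of experts who rate an item as relevant.

```python
# Item-level content validity index (I-CVI): the proportion of expert judges
# who rate an item as relevant. The 1-4 relevance scale and the rule that
# ratings of 3 or 4 count as "relevant" are common conventions, assumed here.

def i_cvi(ratings):
    """ratings: one relevance rating (1-4) per expert judge for a single item."""
    relevant = sum(1 for r in ratings if r >= 3)
    return relevant / len(ratings)

# Six hypothetical judges rate one candidate item:
item_ratings = [4, 3, 4, 2, 3, 4]
print(round(i_cvi(item_ratings), 2))  # 5 of 6 judges rate it relevant -> 0.83
```

    An I-CVI near 1.0 suggests the experts agree the item belongs to the content domain; items with low agreement are candidates for revision or removal.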

    A concept somewhat related to content validity is face validity. These terms are not synonymous, however, because content validity pertains to judgments made by experts, whereas face validity concerns judgments made by the test users. The central issue in face validity is test rapport. Thus, a group of potential mechanics who are being tested for basic skills in arithmetic should have word problems that relate to machines rather than to business transactions. Face validity, then, is present if the test looks good to the persons taking it, to policymakers who decide to include it in their programs, and to other untrained personnel. Despite the potential importance of face validity in regard to test-taking attitudes, disappointingly few formal studies on face validity are performed and/or reported in test manuals.

    In the past, content validity has been conceptualized and operationalized as being based on the subjective judgment of the test developers. As a result, it has been regarded as the least preferred form of test validation, albeit necessary in the initial stages of test development. In addition, its usefulness has been focused primarily on achievement tests (how well has this student learned the content of the course?) and personnel selection (does this applicant know the information relevant to the potential job?). More recently, it has come to be used more extensively in personality and clinical assessment (Butcher, Graham, Williams, & Ben-Porath, 1990; Millon, 1994). This has paralleled more rigorous and empirically based approaches to content validity, along with a closer integration with criterion and construct validation.

    Criterion Validity


    A second major approach to determining validity is criterion validity, which has also been called empirical or predictive validity. Criterion validity is determined by comparing test scores with some sort of performance on an outside measure. The outside measure should have a theoretical relation to the variable that the test is supposed to measure. For example, an intelligence test might be correlated with grade point average; an aptitude test, with independent job ratings; or general maladjustment scores, with other tests measuring similar dimensions. The relation between the two measurements is usually expressed as a correlation coefficient.

    Criterion-related validity is most frequently divided into either concurrent or predictive validity. Concurrent validity refers to measurements taken at the same, or approximately the same, time as the test. For example, an intelligence test might be administered at the same time as assessments of a group’s level of academic achievement. Predictive validity refers to outside measurements that were taken some time after the test scores were derived. Thus, predictive validity might be evaluated by correlating the intelligence test scores with measures of academic achievement a year after the initial testing. Concurrent validation is often used as a substitute for predictive validation because it is simpler, less expensive, and not as time consuming. However, the main consideration in deciding whether concurrent or predictive validation is preferable depends on the test’s purpose. Predictive validity is most appropriate for tests used for selection and classification of personnel. This may include hiring job applicants, placing military personnel in specific occupational training programs, screening out individuals who are likely to develop emotional disorders, or identifying which category of psychiatric populations would be most likely to benefit from specific treatment approaches. These situations all require that the measurement device provide a prediction of some future outcome. In contrast, concurrent validation is preferable if an assessment of the client’s current status is required, rather than a prediction of what might occur to the client at some future time. The distinction can be summarized by asking “Is Mr. Jones maladjusted?” (concurrent validity) rather than “Is Mr. Jones likely to become maladjusted at some future time?” (predictive validity).

    An important consideration is the degree to which a specific test can be applied to a unique work-related environment (see Hogan, Hogan, & Roberts, 1996). This relates more to the social value and consequences of the assessment than to the formal validity as reported in the test manual (Messick, 1995). In other words, can the test under consideration provide accurate assessments and predictions for the environment in which the examinee is working? To answer this question adequately, the examiner must refer to the manual and assess the similarity between the criteria used to establish the test’s validity and the situation to which he or she would like to apply the test. For example, can an aptitude test that has adequate criterion validity in the prediction of high school grade point average also be used to predict academic achievement for a population of college students? If the examiner has questions regarding the relative applicability of the test, he or she may need to undertake a series of specific tasks. The first is to identify the required skills for adequate performance in the situation involved. For example, the criteria for a successful teacher may include such attributes as verbal fluency, flexibility, and good public speaking skills. The examiner then must determine the degree to which each skill contributes to the quality of a teacher’s performance. Next, the examiner has to assess the extent to which the test under consideration measures each of these skills. The final step is to evaluate the extent to which the attributes that the test measures are relevant to the skills the examiner needs to predict. Based on these evaluations, the examiner can estimate the confidence that he or she places in the predictions developed from the test.
This approach is sometimes referred to as synthetic validity because examiners must integrate or synthesize the criteria reported in the test manual with the variables they encounter in their clinical or organizational settings.

    The strength of criterion validity depends in part on the type of variable being measured. Usually, intellectual or aptitude tests give relatively higher validity coefficients than personality tests because there are generally a greater number of variables influencing personality than intelligence. As the number of variables that influence the trait being measured increases, it becomes progressively more difficult to account for them. When a large number of variables are not accounted for, the trait can be affected in unpredictable ways. This can create a much wider degree of fluctuation in the test scores, thereby lowering the validity coefficient. Thus, when evaluating a personality test, the examiner should not expect as high a validity coefficient as for intellectual or aptitude tests. A helpful guide is to look at the validities found in similar tests and compare them with the test being considered. For example, if an examiner wants to estimate the range of validity to be expected for the extraversion scale on the Myers Briggs Type Indicator, he or she might compare it with the validities for similar scales found in the California Personality Inventory and Eysenck Personality Questionnaire. The relative level of validity, then, depends both on the quality of the construction of the test and on the variable being studied.

    An important consideration is the extent to which the test accounts for the trait being measured or the behavior being predicted. For example, the typical correlation between intelligence tests and academic performance is about .50 (Neisser et al., 1996). Because no one would say that grade point average is entirely the result of intelligence, the relative extent to which intelligence determines grade point average has to be estimated. This can be calculated by squaring the correlation coefficient and changing it into a percentage. Thus, if the correlation of .50 is squared, it comes out to 25%, indicating that 25% of academic achievement can be accounted for by IQ as measured by the intelligence test. The remaining 75% may include factors such as motivation, quality of instruction, and past educational experience. The problem facing the examiner is to determine whether 25% of the variance is sufficiently useful for the intended purposes of the test. This ultimately depends on the personal judgment of the examiner.
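    The arithmetic in this paragraph (squaring the correlation to obtain the percentage of variance accounted for) can be sketched in a couple of lines:

```python
# Squaring the validity coefficient gives the proportion of the
# criterion (e.g., grade point average) accounted for by the test.
r = 0.50  # typical IQ/academic-performance correlation cited above
variance_explained = r ** 2 * 100
print(f"{variance_explained:.0f}% of academic achievement accounted for by IQ")
# prints "25% of academic achievement accounted for by IQ"
```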

    The main problem confronting criterion validity is finding an agreed-on, definable, acceptable, and feasible outside criterion. Whereas for an intelligence test the grade point average might be an acceptable criterion, it is far more difficult to identify adequate criteria for most personality tests. Even with so-called intelligence tests, many researchers argue that it is more appropriate to consider them tests of scholastic aptitude rather than of intelligence. Yet another difficulty with criterion validity is the possibility that the criterion measure will be inadvertently biased. This is referred to as criterion contamination and occurs when knowledge of the test results influences an individual’s later performance. For example, a supervisor in an organization who receives such information about subordinates may act differently toward a worker placed in a certain category after being tested. This situation may set up negative or positive expectations for the worker, which could influence his or her level of performance. The result is likely to artificially alter the level of the validity coefficients. To work around these difficulties, especially in regard to personality tests, a third major method must be used to determine validity. 

    Construct Validity


    The method of construct validity was developed in part to correct the inadequacies and difficulties encountered with content and criterion approaches. Early forms of content validity relied too much on subjective judgment, while criterion validity was too restrictive in working with the domains or structure of the constructs being measured. Criterion validity had the further difficulty in that there was often a lack of agreement in deciding on adequate outside criteria. The basic approach of construct validity is to assess the extent to which the test measures a theoretical construct or trait. This assessment involves three general steps. Initially, the test constructor must make a careful analysis of the trait. This is followed by a consideration of the ways in which the trait should relate to other variables. Finally, the test designer needs to test whether these hypothesized relationships actually exist (Foster & Cone, 1995). For example, a test measuring dominance should have a high correlation with the individual accepting leadership roles and a low or negative correlation with measures of submissiveness. Likewise, a test measuring anxiety should have a high positive correlation with individuals who are measured during an anxiety-provoking situation, such as an experiment involving some sort of physical pain. As these hypothesized relationships are verified by research studies, the degree of confidence that can be placed in a test increases.

    There is no single, best approach for determining construct validity; rather, a variety of different possibilities exist. For example, if some abilities are expected to increase with age, correlations can be made between a population’s test scores and age. This may be appropriate for variables such as intelligence or motor coordination, but it would not be applicable for most personality measurements. Even in the measurement of intelligence or motor coordination, this approach may not be appropriate beyond the age of maturity. Another method for determining construct validity is to measure the effects of experimental or treatment interventions. Thus, a posttest measurement may be taken following a period of instruction to see if the intervention affected the test scores in relation to a previous pretest measure. For example, after an examinee completes a course in arithmetic, it would be predicted that scores on a test of arithmetical ability would increase. Often, correlations can be made with other tests that supposedly measure a similar variable. However, a new test that correlates too highly with existing tests may represent needless duplication unless it incorporates some additional advantage such as a shortened format, ease of administration, or superior predictive validity. Factor analysis is of particular relevance to construct validation because it can be used to identify and assess the relative strength of different psychological traits. Factor analysis can also be used in the design of a test to identify the primary factor or factors measured by a series of different tests. Thus, it can be used to simplify one or more tests by reducing the number of categories to a few common factors or traits. The factorial validity of a test is the relative weight or loading that a factor has on the test. 
For example, if a factor analysis of a measure of psychopathology determined that the test was composed of two clear factors that seemed to be measuring anxiety and depression, the test could be considered to have factorial validity. This would be especially true if the two factors seemed to be accounting for a clear and large portion of what the test was measuring.

    Another method used in construct validity is to estimate the degree of internal consistency by correlating specific subtests with the test’s total score. For example, if a subtest on an intelligence test does not correlate adequately with the overall or Full Scale IQ, it should be either eliminated or altered in a way that increases the correlation. A final method for obtaining construct validity is for a test to converge, or correlate highly, with variables that are theoretically similar to it. The test should not only show this convergent validity but also have discriminant validity, in which it would demonstrate low or negative correlations with variables that are dissimilar to it. Thus, scores on reading comprehension should show high positive correlations with performance in a literature class and low correlations with performance in a class involving mathematical computation.

    Related to discriminant and convergent validity is the degree of sensitivity and specificity an assessment device demonstrates in identifying different categories. Sensitivity refers to the percentage of true positives that the instrument has identified, whereas specificity is the relative percentage of true negatives. A structured clinical interview might be quite sensitive in that it would accurately identify 90% of schizophrenics in an admitting ward of a hospital. However, it may not be sufficiently specific in that 30% of nonschizophrenic patients would also be incorrectly classified as schizophrenic. The difficulty in determining sensitivity and specificity lies in developing agreed-on, objectively accurate outside criteria for categories such as psychiatric diagnosis, intelligence, or personality traits.
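    Sensitivity and specificity as defined above reduce to simple ratios of true positives and true negatives. A minimal sketch, with the admitting-ward counts invented to mirror the example (high sensitivity, imperfect specificity):

```python
def sensitivity(true_pos: int, false_neg: int) -> float:
    # Proportion of actual positives correctly identified.
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    # Proportion of actual negatives correctly identified.
    return true_neg / (true_neg + false_pos)

# Hypothetical counts: 100 schizophrenic and 100 non-schizophrenic
# patients classified by the structured interview.
print(sensitivity(true_pos=90, false_neg=10))  # 0.9
print(specificity(true_neg=70, false_pos=30))  # 0.7
```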

    As indicated by the variety of approaches discussed, no single, quick, efficient method exists for determining construct validity. It is similar to testing a series of hypotheses in which the results of the studies determine the meanings that can be attached to later test scores (Foster & Cone, 1995; Messick, 1995). Almost any data can be used, including material from the content and criterion approaches. The greater the amount of supporting data, the greater is the level of confidence with which the test can be used. In many ways, construct validity represents the strongest and most sophisticated approach to test construction; indeed, all types of validity can be considered subcategories of construct validity. It involves theoretical knowledge of the trait or ability being measured, knowledge of other related variables, hypothesis testing, and statements regarding the relationship of the test variable to a network of other variables that have been investigated. Thus, construct validation is a never-ending process in which new relationships always can be verified and investigated.


  • Reliability: Definition, Methods, and Example

    Uncover the true definition of reliability. Understand why reliability is crucial for machines, systems, and test results to perform consistently and accurately. What is Reliability? The quality of being trustworthy or performing consistently well; the degree to which the result of a measurement, calculation, or specification can be depended on to be accurate.

    Here is an explanation of reliability, covering its definition, methods, and examples.

    Definition of Reliability: The ability of an apparatus, machine, or system to consistently perform its intended or required function or mission, on demand and without degradation or failure.

    Manufacturing: The probability of failure-free performance over an item’s useful life, or a specified time-frame, under specified environmental and duty-cycle conditions. Often expressed as mean time between failures (MTBF) or reliability coefficient. Also called quality over time.

    Consistency and validity of test results determined through statistical methods after repeated trials.

    The reliability of a test refers to its degree of stability, consistency, predictability, and accuracy. It addresses the extent to which scores obtained by a person are the same if the person is reexamined by the same test on different occasions. Underlying the concept of reliability is the possible range of error, or error of measurement, of a single score.

    This is an estimate of the range of possible random fluctuation that can be expected in an individual’s score. It should be stressed, however, that a certain degree of error, or noise, is always present in the system, arising from such factors as a misreading of the items, poor administration procedures, or the changing mood of the client. If there is a large degree of random fluctuation, the examiner cannot place a great deal of confidence in an individual’s scores.

    Testing in Trials:

    The goal of a test constructor is to reduce, as much as possible, the degree of measurement error, or random fluctuation. If this is achieved, the difference between one score and another for a measured characteristic is more likely to result from some true difference than from some chance fluctuation. Two main issues relate to the degree of error in a test. The first is the inevitable, natural variation in human performance.

    Usually, the variability is less for measurements of ability than for those of personality. Whereas ability variables (intelligence, mechanical aptitude, etc.) show gradual changes resulting from growth and development, many personality traits are much more highly dependent on factors such as mood. This is particularly true in the case of a characteristic such as anxiety.

    The practical significance of this in evaluating a test is that certain factors outside the test itself can serve to reduce the reliability that the test can realistically be expected to achieve. Thus, an examiner should generally expect higher reliabilities for an intelligence test than for a test measuring a personality variable such as anxiety. It is the examiner’s responsibility to know what is being measured, especially the degree of variability to expect in the measured trait.

    The second important issue relating to reliability is that psychological testing methods are necessarily imprecise. For the hard sciences, researchers can make direct measurements, such as the concentration of a chemical solution, the relative weight of one organism compared with another, or the strength of radiation. In contrast, many constructs in psychology are often measured indirectly.

    For example:

    Intelligence cannot be perceived directly; it must be inferred by measuring behavior that has been defined as intelligent. Variability relating to these inferences is likely to produce a certain degree of error resulting from the lack of precision in defining and observing inner psychological constructs. Variability in measurement also occurs simply because people have true fluctuations in performance (not because of test error) between one testing session and the next.

    Whereas it is impossible to control for the natural variability in human performance, adequate test construction can attempt to reduce the imprecision that is a function of the test itself. Natural human variability and test imprecision make the task of measurement extremely difficult. Although some error in testing is inevitable, the goal of test construction is to keep testing errors within reasonably accepted limits.

    A high correlation is generally .80 or more, but the variable being measured also changes the expected strength of the correlation. Likewise, the method of determining reliability alters the relative strength of the correlation. Ideally, clinicians should hope for correlations of .90 or higher in tests that are used to make decisions about individuals, whereas a correlation of .70 or more is generally adequate for research purposes.

    Methods of Reliability:

    The purpose of reliability is to estimate the degree of test variance caused by error. The four primary methods of obtaining reliability involve determining:

    • The extent to which the test produces consistent results on retesting (test-retest).
    • The relative accuracy of a test at a given time (alternate forms).
    • The internal consistency of the items (split-half).
    • The degree of agreement between two examiners (inter-scorer).

    Another way to summarize this is that reliability can be time to time (test-retest), form to form (alternate forms), item to item (split-half), or scorer to scorer (inter-scorer). Although these are the main types of reliability, there is a fifth type, the Kuder-Richardson; like the split-half, it is a measurement of the internal consistency of the test items. However, because this method is considered appropriate only for tests that are relatively pure measures of a single variable, it is not covered here.

    Test-Retest Reliability:

    Test-retest reliability is determined by administering the test and then repeating it on a second occasion. The reliability coefficient is calculated by correlating the scores obtained by the same person on the two different administrations. The degree of correlation between the two scores indicates the extent to which the test scores can be generalized from one situation to the next.

    If the correlations are high, the results are less likely to be caused by random fluctuations in the condition of the examinee or the testing environment. Thus, when the test is being used in actual practice, the examiner can be relatively confident that differences in scores are the result of an actual change in the trait being measured rather than of random fluctuation.

    Several factors must be considered in assessing the appropriateness of test-retest reliability. One is that the interval between administrations can affect reliability. Thus, a test manual should specify the interval as well as any significant life changes that the examinees may have experienced, such as counseling, career changes, or psychotherapy.

    For example:

    Tests of preschool intelligence often give reasonably high correlations if the second administration is within several months of the first one. However, correlations with later childhood or adult IQ are generally low because of innumerable intervening life changes. One of the major difficulties with test-retest reliability is the effect that practice and memory may have on performance, which can produce improvement between one administration and the next.

    This is a particular problem for speeded and memory tests, such as those found on the Digit Symbol and Arithmetic subtests of the WAIS-III. Additional sources of variation may be the result of random, short-term fluctuations in the examinee or variations in the testing conditions. In general, test-retest reliability is the preferred method only if the variable being measured is relatively stable. If the variable is highly changeable (e.g., anxiety), this method is usually not adequate.

    Alternate Forms:

    The alternate forms method avoids many of the problems encountered with test-retest reliability. The logic behind alternate forms is that, if the trait is measured several times on the same individual using parallel forms of the test, the different measurements should produce similar results. The degree of similarity between the scores represents the reliability coefficient of the test.

    As in the test-retest method, the interval between administrations should always be included in the manual, as well as a description of any significant intervening life experiences. If the second administration is given immediately after the first, the resulting reliability is more a measure of the correlation between forms than of stability across occasions.

    More things:

    Correlations determined by tests given with a wide interval, such as two months or more, provide a measure of both the relation between forms and the degree of temporal stability. The alternate forms method eliminates many carryover effects, such as the recall of previous responses the examinee has made to specific items.

    However, there is still likely to be some carryover effect in that the examinee can learn to adapt to the overall style of the test even when the specific item content between one test and another is unfamiliar. This is most likely when the test involves some sort of problem-solving strategy in which the same principle used in solving one problem can be used to solve the next one.

    An examinee, for example, may learn to use mnemonic aids to increase his or her performance on an alternate form of the WAIS-III Digit Symbol subtest. Perhaps the primary difficulty with alternate forms lies in determining whether the two forms are equivalent.

    For example:

    If one test is more difficult than its alternate form, the difference in scores may represent actual differences in the two tests rather than differences resulting from the unreliability of the measure. Because the test constructor is attempting to measure the reliability of the test itself and not the differences between the tests, this could confound and lower the reliability coefficient.

    Alternate forms should be independently constructed tests that use the same specifications, including the same number of items, type of content, format, and manner of administration. A final difficulty is encountered primarily when there is a delay between one administration and the next. With such a delay, the examinee may perform differently because of short-term fluctuations such as mood, stress level, or the relative quality of the previous night’s sleep.

    Thus, an examinee’s abilities may vary somewhat from one examination to another, thereby affecting test results. Despite these problems, alternate forms reliability has the advantage of at least reducing, if not eliminating, any carryover effects of the test-retest method. A further advantage is that the alternate test forms can be useful for other purposes, such as assessing the effects of a treatment program or monitoring a patient’s changes over time by administering the different forms on separate occasions. 

    Split Half Reliability:

    The split-half method is the best technique for determining reliability for a trait with a high degree of fluctuation. Because the test is given only once, the items are split in half, and the two halves are correlated. As there is only one administration, the effects of time cannot intervene as they might with the test-retest method.

    Thus, the split-half method gives a measure of the internal consistency of the test items rather than the temporal stability of different administrations of the same test. To determine split-half reliability, the test is often split based on odd and even items. This method is usually adequate for most tests. Dividing the test into a first half and a second half can be effective in some cases, but it is often inappropriate because of the cumulative effects of warming up, fatigue, and boredom, all of which can result in different levels of performance on the first half of the test compared with the second.

    As is true with the other methods of obtaining reliability, the split-half method has limitations. When a test is split in half, there are fewer items on each half, which results in wider variability because the individual responses cannot stabilize as easily around a mean. As a general principle, the longer a test is, the more reliable it is, because the larger the number of items, the easier it is for the majority of items to compensate for minor alterations in responding to a few of the other items. As with the alternate forms method, differences in content may exist between one half and another.

    Inter-scorer Reliability:

    In some tests, scoring is based partially on the judgment of the examiner. Because judgment may vary between one scorer and the next, it may be important to assess the extent to which reliability might be affected. This is especially true for projective tests, and even for some ability tests where hard scorers may produce results somewhat different from easy scorers.

    This variance in interscorer reliability may apply to global judgments based on test scores, such as brain injury versus normal, or to small details of scoring, such as whether a person has given a shading versus a texture response on the Rorschach. The basic strategy for determining interscorer reliability is to obtain a series of responses from a single client and to have these responses scored by two different individuals.

    A variation is to have two different examiners test the same client using the same test and then to determine how close their scores or ratings of the person are. The two sets of scores can then be correlated to determine a reliability coefficient. Any test that requires even partial subjectivity in scoring should provide information on interscorer reliability.

    The best form of reliability depends on both the nature of the variable being measured and the purposes for which the test is used. If the trait or ability being measured is highly stable, the test-retest method is preferable, whereas split-half is more appropriate for characteristics that are highly subject to fluctuations. When using a test to make predictions, the test-retest method is preferable because it gives an estimate of the dependability of the test from one administration to the next.

    More things:

    This is particularly true if, when determining reliability, an increased time interval existed between the two administrations. If, on the other hand, the examiner is concerned with the internal consistency and accuracy of a test for a single, one-time measure, either the split-half or the alternate forms method would be best.

    Another consideration in evaluating the acceptable range of reliability is the format of the test. Longer tests usually have higher reliabilities than shorter ones. Also, the format of the responses affects reliability. For example, a true-false format is likely to have lower reliability than multiple choice because each true-false item has a 50% possibility of the answer being correct by chance.

    In contrast, each question in a multiple-choice format with five possible choices has only a 20% possibility of being correct by chance. A final consideration is that tests with various subtests or subscales should report the reliability for the overall test as well as for each of the subtests. In general, the overall test score has significantly higher reliability than its subtests. In estimating the confidence with which test scores can be interpreted, the examiner should take into account the lower reliabilities of the subtests.

    For example:

    A Full Scale IQ on the WAIS-III can be interpreted with more confidence than the specific subscale scores. Most test manuals include a statistical index of the amount of error that can be expected in test scores, referred to as the standard error of measurement (SEM). The logic behind the SEM is that test scores consist of both truth and error.

    Thus, there is always noise or error in the system, and the SEM provides a range to indicate how extensive that error is likely to be. The range depends on the test’s reliability: the higher the reliability, the narrower the range of error. The SEM is a standard deviation score, so that, for example, an SEM of 3 on an intelligence test would indicate that an individual’s score has a 68% chance of falling within ±3 IQ points of the estimated true score.

    Result of Score:

    This is because a band of ±1 SEM (here, 3 IQ points) around the estimated true score captures about 68% of a normal distribution. Likewise, there would be a 95% chance that the individual’s score would fall within a range of ±6 points (±2 SEM) of the estimated true score. From a theoretical perspective, the SEM is a statistical index of how a person’s repeated scores on a specific test would fall around a normal distribution.
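    The SEM arithmetic can be reproduced with the standard psychometric formula SEM = SD × √(1 − r), which is not given explicitly in the text above; the standard deviation and reliability below are hypothetical, chosen so the SEM comes out to 3, matching the example, and the 68%/95% bands use the usual ±1 and ±2 SEM approximations:

```python
import math

sd = 15             # typical IQ-scale standard deviation
reliability = 0.96  # hypothetical reliability coefficient

# Standard error of measurement: SEM = SD * sqrt(1 - r)
sem = sd * math.sqrt(1 - reliability)

observed = 110      # hypothetical obtained IQ score
band_68 = (observed - sem, observed + sem)          # ~68% chance (±1 SEM)
band_95 = (observed - 2 * sem, observed + 2 * sem)  # ~95% chance (±2 SEM)
print(f"SEM = {sem:.1f}")
print(f"68% band: {band_68[0]:.0f}-{band_68[1]:.0f}, "
      f"95% band: {band_95[0]:.0f}-{band_95[1]:.0f}")
```

    Note how a higher reliability shrinks the SEM and therefore narrows both confidence bands, exactly as described above.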

    Thus, it is a statement of the relationship among a person’s obtained score, his or her theoretically true score, and the test’s reliability. Because it is an empirical statement of the probable range of scores, the SEM has more practical usefulness than a knowledge of the test’s reliability. This band of error is also referred to as a confidence interval.

    The acceptable range of reliability is difficult to identify and depends partially on the variable being measured. In general, unstable aspects of the person (states) produce lower reliabilities than stable ones (traits). Thus, in evaluating a test, the examiner should expect higher reliabilities for stable traits or abilities than for changeable states.

    2] For example:

    A person’s general fund of vocabulary words is highly stable and therefore produces high reliabilities. In contrast, a person’s level of anxiety is often highly changeable. This means examiners should not expect nearly as high reliabilities for anxiety as for an ability measure such as vocabulary. A further consideration, related to the stability of the trait or ability, is the method used to estimate reliability.

    Alternate forms are considered to give the lowest estimate of the actual reliability of a test, while split-half provides the highest estimate. Another important way to judge the adequacy of a reliability coefficient is to compare it with the reliabilities of other, similar tests. The examiner can then develop a sense of the expected levels of reliability, which provides a baseline for comparisons.
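    One reason split-half estimates run high is that the correlation between two half-length tests is projected up to full length with the Spearman-Brown prophecy formula, r_full = 2r / (1 + r). A minimal sketch of that correction (the .70 half-test correlation is an illustrative value, not from the text):

    ```python
    def spearman_brown(r_half: float) -> float:
        """Spearman-Brown correction: estimate the reliability of the
        full-length test from the correlation between its two halves."""
        return 2.0 * r_half / (1.0 + r_half)

    # A half-test correlation of .70 projects to a full-test
    # reliability of about .82.
    print(round(spearman_brown(0.70), 3))   # 0.824
    ```

    Because the correction always raises the half-test correlation, split-half reliability tends toward the optimistic end of the estimates the text describes.
    
    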

    Result of example;

    In the example of anxiety, a clinician may not know what an acceptable level of reliability is. A general estimate can be made by comparing the reliability of the test under consideration with that of other tests measuring the same or a similar variable. The most important thing to keep in mind is that lower levels of reliability usually mean that less confidence can be placed in the interpretations and predictions based on the test data.

    However, clinical practitioners are less likely to be concerned with low statistical reliability if they have some basis for believing the test is a valid measure of the client’s state at the time of testing. The main consideration is that the sign or test score should not mean one thing at one time and something different at another.

    Goal Commitment: Meaning and Definition

    What affects the strength of commitment to goals? How does this affect goal attainment? Goal commitment is our determination to pursue a course of action that will lead to the goal we aspire to achieve (Bandura, 1986). The strength of goal commitment affects how hard one will try to attain the goal. Goal commitment is affected by the properties described thus far: difficulty and specificity. For example, when goals are too difficult, commitment declines, followed by a drop-off in performance (Locke & Latham, 1990).

    What is Goal Commitment?

    “The degree to which a person is determined to achieve a desired (or required) goal.”

    Goals are central to current treatments of work motivation, and goal commitment is a critical construct in understanding the relationship between goals and task performance. Despite this importance, there is confusion about the role of goal commitment, and only recently has this key construct received the empirical attention it warrants. A meta-analysis based on 83 independent samples updated the goal commitment literature by summarizing the accumulated evidence on the antecedents and consequences of goal commitment. Using this aggregate empirical evidence, the role of goal commitment in the goal-setting process has been clarified and key areas for future research identified.

    Commitment is also affected by goal intensity, goal participation, and peer influence.

    Goal Intensity:

    Commitment is related to goal intensity, or the amount of thought or mental effort that goes into formulating a goal and how it will be attained (Locke & Latham, 1990). This is similar to goal clarification, because when we clarify a goal, we engage in a conscious process of collecting information about the goal, the task, and our ability to attain it (Schutz, 1989).

    In a study of fifth graders, Henderson (cited in Locke & Latham, 1990) found that students who formulated a greater number of reading purposes with more detail and elaboration attained their goals to a greater extent than did students with superficial purposes. Although there was no difference in the IQ scores of the groups, the students who set more goals with elaboration were better readers. It stands to reason that the more thought that is given to developing a goal, the more likely one is to commit to it.

    Goal Participation:

    How important, motivationally, is it for people to participate in goal setting? This is an important question, because goals are often assigned by others at home, school, and work. The state imparts curriculum standards or goals to teachers, who in turn impose them on students. A sales manager may assign quotas to individual salespersons. Letting individuals participate in setting goals can lead to greater satisfaction. Nevertheless, telling people to achieve a goal can also raise self-efficacy, because it suggests they are capable of achieving it (Locke & Latham, 1990).

    To investigate the effects of assigned and self-set goals, Schunk (1985) conducted a study of sixth-grade students with learning disabilities who were learning subtraction. One group was assigned goals (e.g., “Why don’t you try to do seven pages today”). A second group set goals themselves (e.g., “Decide how many pages you can do today”). A third group worked without goals. Students who self-set goals had the highest self-efficacy and math scores, and both goal groups demonstrated higher levels of self-regulation than the control group without any goals.

    Nevertheless, Locke and Latham (1990) concluded that self-set goals are not consistently more effective than assigned goals in increasing performance. The crucial factor in assigned goals is acceptance. Once individuals become involved in a goal, the goal itself becomes more important than how it was set or whether it was imposed. Because goals at work and in schools are often assigned by others, assigned goals must be accepted by participants. Joint participation in goal setting by teachers and students may increase the acceptance of goals.

    Peer Influence:

    One area where teachers might be influential in promoting goal acceptance and commitment is peer influence. Strong group pressures are likely to increase commitment to goals (Locke & Latham, 1990). This kind of group cohesiveness is most often found on athletic teams, where the coach obviously wants a strong commitment to the team goals. In the classroom, group goals may aid the commitment of students working in cooperative learning groups and thus lead to a higher quality of work.

    What an entrepreneur needs to do to stay committed to achieving business goals:

    Make sure that your business goals are achievable.

    The biggest enemy of achieving business goals is setting unrealistic goals. For example, if you set a goal to increase sales by 500% while the industry as a whole is growing by less than 10%, 500% is surely unrealistic.

    If you notice that a goal cannot be achieved, simply adjust it in line with reality. For example, aim for a 15% increase in sales instead of 500%. The goal of 15% would be much more realistic, and it will still be imperative for you and your business to achieve it, because it is above the industry average.

    Use specific sentences in your business goals.

    Consider the goal from the example above: increasing sales in the future. By how much do we need to increase sales? By when do we need to increase them? This is a confusing, undetermined goal. If you don’t know what to achieve and when to achieve it, you will probably not even try to achieve it.

    Write your business goals down on paper.

    Scientific research shows that if you put something on paper, your commitment to it will be higher. In his book Influence: The Psychology of Persuasion, Dr. Robert Cialdini gives an example from the Korean War, in which Chinese captors asked the prisoners of war in their camps to write statements that communism was better than the US system. Having written the statements, the prisoners remained committed for a long time to words they did not actually believe. Business goals written down on paper will belong to the group with higher commitment, unlike the goals that remain only in our heads.

    Determine the activities that must be accomplished.

    Knowing in advance the activities that must be implemented to achieve your business goals will increase your level of commitment to them. Therefore, once you have the goal on paper, list the activities.

    Assign responsibility for each activity.

    For each activity, assign someone responsible for its implementation. In this way, commitment is transferred to the employees or your team members, and at the same time achievement becomes more assured.
