TalentIndikator©

We are proud of our product TalentIndikator©, and we are therefore also transparent about the trustworthiness and validity of our testing tool. Here you can read all about the test tool TalentIndikator©.

General information about the TalentIndikator test

Every test is a sample of reality. We seek to uncover the talent potential of the individual person, so that he or she becomes better able to articulate themselves, to develop and use their own talents to create strengths and competencies, and to place these in a systemic universe.

We actively address this "sampling" uncertainty – in what way can the test say something about what we are really looking for? The larger the sample, the greater the certainty, so we have chosen to measure 34 talents and 3 credibility indicators. Control questions, the answer/next and time-out mechanism, and answer-distribution analysis are our methods of credibility control.

We work with criterion-referenced scoring, where it is the test results themselves that are assessed. They are not compared with a given norm, so the interpretation is not driven by deviations from some reference point, but by the talent of the person who completed the test.

The purpose is to indicate the individual person's talent profile as precisely as possible. Normative tests – where the result is stated as a deviation from a norm – have a greater tendency to invite situation-dependent answers. This means that a test taker may tend to over-expose what he or she believes is a desirable role for the given project. Normative tests are based on hypotheses or statistics showing what is most normal; the alternative, ipsative tests, aims to eliminate context and the "desired" profile.

We have chosen an ipsative structure, which reduces the degree of cheating and the possibility of influencing the result.

Each profile rank-orders the individual person's preferences according to his or her assessment of a series of "forced choices".

Control questions are a construction in which a number of questions are presented twice during the course of the test, and we measure the degree of agreement on both an absolute and a relative dimension. We are looking for the degree of consistency in the answers.
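As a rough illustration of this consistency check (a sketch under our own assumptions – the function name and the relative-agreement rule are not the actual TalentIndikator algorithm), the two passes over the control questions could be compared like this:

```python
def control_consistency(first_pass, second_pass):
    """Compare two passes over the same control questions.

    Absolute agreement: fraction of questions answered identically.
    Relative agreement: fraction of questions that keep the same rank
    position when each pass is rank-ordered.
    """
    n = len(first_pass)
    rank = lambda xs: [sorted(xs).index(x) for x in xs]
    absolute = sum(a == b for a, b in zip(first_pass, second_pass)) / n
    relative = sum(a == b for a, b in zip(rank(first_pass), rank(second_pass))) / n
    return absolute, relative

control_consistency([1, 2, 3, 4], [1, 2, 3, 4])  # fully consistent respondent
```

The two numbers separate a respondent who shifts all answers by a constant (low absolute, high relative agreement) from one who answers erratically (both low).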

To simulate the pressures of reality, a maximum time frame of 20 seconds is given for each answer. We measure the time used and the number of times the time ran out, and the respondent also has the option of actively skipping a question. This option ensures that respondents are not forced to make a choice about something they do not know, and it gives us the possibility of measuring precisely where a person is uncertain and perhaps controlled by a given situation.

The last parameter is a topological illustration of the answer distribution, in which we look for mirroring around the central axis; with the chosen algorithm, this is taken as an expression of the person having completed the test with a high degree of clarified self-insight.
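One way to read this "mirroring around the central axis" (a minimal sketch under our own reading – the tool's actual algorithm is not published here) is as the overlap between the answer histogram and its mirror image:

```python
def mirror_symmetry(histogram):
    """Overlap between an answer histogram and its mirror image.

    Returns 1.0 for a perfectly mirror-symmetric distribution and
    0.0 when the distribution and its mirror do not overlap at all.
    """
    mirrored = histogram[::-1]
    overlap = sum(min(a, b) for a, b in zip(histogram, mirrored))
    return overlap / sum(histogram)

mirror_symmetry([5, 10, 10, 5])  # symmetric distribution -> 1.0
mirror_symmetry([10, 6, 3, 1])   # skewed distribution -> lower score
```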

Taken together, we use these 3 parameters plus the person's time spent as an indicator of the overall validity and reliability of the answers.

Talentindikatoren© is built on an ipsative scoring structure. All choices and skips come from a forced ranking of two simultaneously presented, equivalent statements.

The purpose is to compare a person's relative preference for different value sets, not the person's absolute preference for each of the activities compared with other people. Ipsative scoring systems are suitable for ranking a person's own scores.
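A minimal sketch of such forced-choice ipsative scoring (our own illustration – the talent names and the one-point-per-choice rule are assumptions, not the tool's actual algorithm): each item pairs two statements, the chosen statement's talent earns a point, and the profile is the rank order of the points.

```python
from collections import Counter

def score_ipsative(choices):
    """choices: (talent_a, talent_b, picked) tuples, picked in {0, 1}.

    The chosen talent earns a point; the profile is the rank order of
    points, i.e. relative preferences rather than absolute levels.
    """
    points = Counter()
    for talent_a, talent_b, picked in choices:
        points[talent_b if picked else talent_a] += 1
    return [talent for talent, _ in points.most_common()]

profile = score_ipsative([
    ("Empathy", "Focus", 0),   # picked Empathy
    ("Focus", "Drive", 1),     # picked Drive
    ("Empathy", "Drive", 0),   # picked Empathy
])
```

Note that the output orders talents only relative to each other – exactly the ipsative property described above: it says nothing about how strong a preference is compared with other people.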

Does the test measure what you expect? Is the test reliable? Does the test show the same result over time?

When we discuss validity, it is because we want as high a degree of predictive validity as possible – this is why we make the test in the first place.

We consider validity at 3 different levels: Rules, Guidelines and Results.

Rules

The Data Protection Authority, in its capacity as administrator of the personal data legislation, sets requirements for the handling and processing of personal data. We work from 4 principles:

  • Availability – that a specified group of TalentInsights employees, at the right time and place, has access to the information the person has given consent to them using.
  • Confidentiality – that others, who should not have access to this information, are prevented from processing it.
  • Integrity – that data are what they claim to be, i.e. that they have not been changed (for example deleted) without this being stated clearly.
  • Traceability – that it can be documented who has created, viewed, changed, deleted or otherwise processed data.

We are approved by the Data Protection Authority as a processor of personal data.

TalentCloud runs over a 128-bit encrypted SSL connection at i23, and the server is certified to approved standards.

This means, among other things, that every access to data must be secured through authentication, so that we are as sure as possible that the person who gave the answers is the right person, and that others have not had the opportunity to change the registered data.

Data is stored in TalentCloud for 6 months, after which the data is anonymized.

Read about our processing of data here.

Guidelines

We comply with guidelines given by:

  • Dansk Psykologforening (1999)
  • Videnscenter for Professionel Personvurdering (2011)

on how to act in connection with personality tests, through our own instructions and working methods.

Interpretation of results – validity

Interpretation of results is handled according to 2 guidelines: the internal and the external. The internal reflects the relations we have built into our model structure; the external expresses whether the people we evaluate can confirm the picture we draw of them.

The internal validity of the results has been secured by:

  • Asking 408 questions formed as 204 paired positive statements
  • Asking about every factor (talent) 12 different times
  • Building in 16 control questions
  • Using an ipsative scoring structure
  • Running with time management on all questions (a maximum of 20 seconds per question)
  • Not allowing earlier answers to be corrected
  • Using a scale without a neutral midpoint

The external validity has been secured by the fact that, to date, no test persons have been able to reject their test results – a few have commented on the order of the individual talents, but not on the existence of the relevant trait.

We regularly carry out statistical evaluations of the underlying model. What we most often verify is the degree of independence between the 34 factors. It is important for the model's integrity and validity that the characteristics we mirror are as robust and unambiguous as possible.

Interpretation of results – reliability

  • The reliability of the test is controlled partly by a longitudinal and partly by a split-half test design.
  • The accumulated base of test results is tested every 12 months, where the tests from the last 12 months are measured against all previously given tests – also called test-retest reliability.
  • In addition, we split the results randomly into two groups and validate the reliability between them.
  • In both cases, reliability measures the uniformity over time and across the population. We have not been able to find measurement errors in the system since it was released in 2005.
  • We measure with Cronbach's alpha on the whole population as well as on the split halves.
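The Cronbach's-alpha and split-half checks above can be sketched as follows. The data, sample sizes and seed are our own toy assumptions, chosen only to illustrate the computation:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    n_items = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return n_items / (n_items - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
# Toy data: 200 respondents, 12 items all measuring one latent factor
# (12 matches the "every factor asked 12 times" design above).
latent = rng.normal(size=(200, 1))
scores = latent + rng.normal(scale=0.8, size=(200, 12))

alpha = cronbach_alpha(scores)

# Split-half: randomly split respondents into two groups and compare.
perm = rng.permutation(200)
alpha_half_1 = cronbach_alpha(scores[perm[:100]])
alpha_half_2 = cronbach_alpha(scores[perm[100:]])
```

An alpha close to 1 indicates that the items measure the same construct consistently; similar alphas in the two random halves are the split-half evidence of reliability.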

Test of correlations

The average correlation across the 561 pairwise correlations (all pairs of the 34 talents) was previously measured at 0.144 and most recently, in 2020, at 0.122. This means that, with a 99% confidence interval on both positive and negative correlations, the analysis shows it is between 91.35% and 96.67% certain that there is no correlation between the 34 individual talents. In other words, the model structure ensures that there is no statistical correlation between the ways in which the individual preferences are uncovered.
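For reference, 561 is the number of talent pairs among 34 talents, C(34, 2). A small sketch of the pairwise-correlation check on toy data (the respondent matrix is simulated with independent factors; only the pair count matches the text):

```python
import numpy as np
from itertools import combinations

n_talents = 34
pairs = list(combinations(range(n_talents), 2))  # C(34, 2) = 561 pairs

rng = np.random.default_rng(1)
# Toy respondent x talent matrix with independent (uncorrelated) columns
profiles = rng.normal(size=(500, n_talents))

corr = np.corrcoef(profiles, rowvar=False)
pairwise = np.array([corr[i, j] for i, j in pairs])

mean_corr = pairwise.mean()         # near 0 for independent talents
mean_abs = np.abs(pairwise).mean()  # average magnitude of the 561 correlations
```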

Test of independence with the help of random numbers

We have additionally made a test that compares the empirical data with a randomly generated data population: a so-called Monte Carlo run of 561 hypothetical correlations, placed in the interval [-0.405; 0.457] – that is, between the smallest and the largest correlation in the empirical material. We used Excel's Data Analysis add-in with uniformly distributed random numbers.

If we look at the 99% confidence interval, the lower limit for the random run (-0.004) and the upper limit for the empirical run (-0.014) lie very close to each other. The distance corresponds to 1/100 of the length of the interval [0; -1], or 1/200 – i.e. on the per-mille level – relative to the correlations' possible outcome range [-1; 1].
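The Monte Carlo comparison described above used Excel; an equivalent sketch (the seed and the normal-approximation confidence interval are our assumptions) draws 561 uniform correlations from the empirical range and computes a 99% confidence interval for their mean:

```python
import numpy as np

rng = np.random.default_rng(42)
low, high = -0.405, 0.457  # empirical min and max correlation from the text
sims = rng.uniform(low, high, size=561)  # 561 hypothetical correlations

mean = sims.mean()
sem = sims.std(ddof=1) / np.sqrt(sims.size)  # standard error of the mean
z99 = 2.576  # two-sided 99% quantile of the normal distribution
ci_low, ci_high = mean - z99 * sem, mean + z99 * sem
```

Comparing the limits of this simulated interval with those of the empirical run is the check the text describes: if the two lie close together, the empirical correlations behave like the random baseline.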