This post by raph on multidimensional interpolation points to a pretty good survey on scattered data interpolation, which is a problem I have tried to tackle in the past (and present, in a way). It would have been nice to see this survey back then :).
It's been a while since I thought about such issues, but there are interesting connections here to various problems in computational geometry and graphics.
raph also points out that the survey does not give clear guidance, one way or another, which is quite understandable given the range of problems people solve with scattered data interpolation.
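For a feel of what the simplest end of that family looks like, here is a minimal sketch of inverse-distance weighting (Shepard's method), one of the classic scattered-data baselines. This is my own toy illustration, not anything taken from the survey:

```python
import math

def idw(points, values, query, power=2.0):
    """Inverse-distance-weighted interpolation (Shepard's method).

    points: list of coordinate tuples of the scattered samples,
    values: sample values at those points,
    query:  coordinate tuple to interpolate at.
    """
    num, den = 0.0, 0.0
    for p, v in zip(points, values):
        d = math.dist(p, query)
        if d == 0.0:
            return v  # query coincides with a sample point
        w = 1.0 / d ** power  # closer samples weigh more
        num += w * v
        den += w
    return num / den

# samples at the corners of the unit square
pts = [(0, 0), (1, 0), (0, 1), (1, 1)]
vals = [0.0, 1.0, 1.0, 2.0]
print(idw(pts, vals, (0.5, 0.5)))  # center is equidistant, so this is the plain average: 1.0
```

It interpolates exactly at the samples and is trivial to implement, which is about all it has going for it; the survey's more interesting methods exist precisely because of its flat spots and bull's-eye artifacts.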
--
I've been looking through some of raph's recent posts from the perspective of data mining. Data mining is a relatively new field that tries to extract knowledge from large amounts of data. It has roots in database technology, statistics, machine learning, visualization, and algorithmics.
I'm curious how work in data mining could be applied, if it hasn't been already, to work on trust metrics. In particular, what little I've seen of such work seems often to be concerned with directed attacks: trying to prove that a system can robustly survive in the face of would-be attackers, or at least degrade gracefully.
I'm not sure whether work on outlier detection happens mainly in data mining or in networking, but there seem to be many difficult problems to solve. PayPal, for example, reportedly tries to detect attacks that involve some notion of a clique or chain. Similarly, insurance companies want to detect whether a group of cars forms a path of destruction, literally, in that some nefarious gang of thieves has staged a chain of crashes to bilk the insurers.
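One naive way to make that chain intuition concrete (a toy of my own, with invented names, nothing to do with what PayPal or any insurer actually runs) is to model claims as a directed graph and flag simple cycles, since a staged-crash ring tends to close back on itself:

```python
from collections import defaultdict

def find_cycles(edges):
    """Find simple cycles in a directed graph of claims via DFS.

    edges: iterable of (claimant, counterparty) pairs, e.g. ("A", "B")
    meaning "A filed a claim against B". A cycle A -> B -> C -> A is the
    sort of ring an insurer might want a human to look at.
    """
    graph = defaultdict(list)
    for a, b in edges:
        graph[a].append(b)

    cycles = set()

    def dfs(node, path):
        for nxt in graph[node]:
            if nxt in path:
                cycle = tuple(path[path.index(nxt):])
                # canonicalize: rotate so the smallest node leads,
                # so each cycle is reported once
                j = cycle.index(min(cycle))
                cycles.add(cycle[j:] + cycle[:j])
            else:
                dfs(nxt, path + [nxt])

    for start in list(graph):
        dfs(start, [start])
    return cycles

claims = [("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")]
print(find_cycles(claims))  # {('A', 'B', 'C')}
```

This brute-force search is exponential in the worst case, which is exactly why the real versions of this problem are hard; the interesting work is in doing something like it at scale, and in deciding which flagged structures are fraud rather than coincidence.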
I have my own intuitions about how problems might be solved, but nothing concrete yet.
I'm not quite sure what to think about "trust". As someone who is not familiar with the literature, when I see this word, I think of sensitivity and perturbation analysis from scientific computing (numerical analysis). Given some system, how much effect can small perturbations in the input have on the resultant output? I think of robust data streams and adversary arguments from theoretical computer science.
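The perturbation intuition is easy to show with a tiny ill-conditioned linear system, where a small nudge to the input moves the output by orders of magnitude more. This is just the standard numerical-analysis picture, sketched by hand with Cramer's rule:

```python
def solve2x2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] @ (x, y) = (e, f) by Cramer's rule."""
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)

# A nearly singular system: the two rows are almost parallel.
x1 = solve2x2(1.0, 1.0, 1.0, 1.0001, 2.0, 2.0001)  # solution near (1, 1)
x2 = solve2x2(1.0, 1.0, 1.0, 1.0001, 2.0, 2.0002)  # right-hand side nudged by 0.0001

# The input moved by 1e-4; the solution moves by about 1.
print(x1, x2)
```

If "trust" behaves at all like this, the question becomes how much a small amount of adversarial noise in the input (a few bad ratings, say) can swing the system's conclusions.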
In particular, my first thought would be to employ hierarchies as a natural way of understanding how humans trust. Human trust is not so infallible a thing, but perhaps it is a start. Then again, I reiterate my lack of knowledge of the field. I merely give my first impressions.
My initial reaction would be to be wary of trust metric systems that are completely automated, but this is probably too vague a claim. In practice, there is some human intervention.
I do find recent work on differentiating humans from computers to be an interesting slant on things. Even funnier, perhaps, is that again people simply bypass such mechanisms, hiring pasty-faced teens to read scribbly numbers or, in general, to recognize hidden patterns, something computers still can't quite do well.
The joke often made here is that computers are not adapting to us; we are adapting to them. We modulate our credit behavior so that some computer algorithm thinks well of us, and here we are putting our teens to work, having them live up to their full human potential by clearing the path for spam meisters everywhere.