#1332 closed defect (bug) (fixed)
Caching issues with profile data when using a persistent cache
| Reported by: | swinton | Owned by: | |
|---|---|---|---|
| Milestone: | 2.0 | Priority: | major |
| Severity: | normal | Version: | |
| Component: | Extended Profile | Keywords: | |
| Cc: | | | |
Description
I've noticed some issues with cached profile data when using a persistent cache such as memcached.
The cached profile data isn't expired when new profile fields are added or when profile data is changed.
To reproduce:
- Configure WPMU/BP to use a persistent cache such as memcached (e.g. install the object-cache drop-in from http://plugins.trac.wordpress.org/browser/memcached/trunk/object-cache.php into /wp-content, then run a memcached server)
- Load a member's profile page, and their edit profile page (so that their profile data is stored by memcached)
- Add a new profile field, and reload the edit profile page (the new profile field isn't shown)
- Update an existing profile field and save the data (the updated profile data isn't shown)
The problem is that the profile data cache isn't being invalidated at the appropriate times, e.g. when a new profile field is added or when profile data changes.
This can be worked around with a plugin that deletes the cache at the appropriate times, using the provided BP hooks (example attached).
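A workaround plugin along those lines might look like the sketch below. The hook names (`xprofile_data_after_save`, `xprofile_fields_saved_field`) are actions fired by the xprofile component, but the cache keys and the `'bp'` group here are hypothetical placeholders for illustration, not BP's actual cache schema.

```php
<?php
/*
Plugin Name: BP Profile Cache Buster (illustrative sketch)
*/

// When a user's profile data is saved, delete that user's cached
// profile so the next request rebuilds it from the database.
// NOTE: the cache key and the 'bp' group are made-up placeholders.
function my_bust_profile_data_cache( $data ) {
	wp_cache_delete( 'bp_profile_' . $data->user_id, 'bp' );
}
add_action( 'xprofile_data_after_save', 'my_bust_profile_data_cache' );

// When a profile field is added or edited, any cached field list is
// stale, so clear the (hypothetical) site-wide field-list entry too.
function my_bust_profile_fields_cache( $field ) {
	wp_cache_delete( 'bp_profile_fields', 'bp' );
}
add_action( 'xprofile_fields_saved_field', 'my_bust_profile_fields_cache' );
```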
Attachments (1)
Change History (23)
#4 @ 14 years ago
- Component set to Core
- Milestone changed from 1.3 to Future Release
I like the idea in this plugin, but it will have to be adapted considerably to work properly with the current trunk.
This ticket was mentioned in IRC in #buddypress-dev by r-a-y (11 years ago).
#6 @ 11 years ago
- Component changed from Core to XProfile
- Keywords xprofile cache memcached removed
- Milestone changed from Future Release to 2.0
- Severity set to normal
Moving this ticket to the 2.0 milestone, to serve as a master ticket for caching improvements in the xprofile component.
#9 @ 11 years ago
We should use a different cache group for each component; we may need even more granular cache groups but one per component is a good starting point pending real world feedback.
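In WP's object cache API, per-component grouping would look something like the sketch below; the key and group names here are hypothetical, chosen only to illustrate the idea.

```php
// Same key, different per-component groups: the entries don't collide,
// and each component's cache reads and writes are namespaced.
// (Keys and group names are made up for illustration.)
wp_cache_set( 'user_5', $xprofile_data, 'bp_xprofile' );
wp_cache_set( 'user_5', $activity_data, 'bp_activity' );

// Reads must name the same group the data was written to.
$profile = wp_cache_get( 'user_5', 'bp_xprofile' );
```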
#10 @ 11 years ago
DJPaul - Is there a technical reason for grouping other than namespaces and ease of invalidation?
#11 @ 11 years ago
Yes. I should first say that my (limited) experience is based on using Memcached as the object caching backend. It's possible that other object caches behave differently; I don't know.
Actually, groups don't make invalidation any easier [0]; it's not possible to delete all the items in a group. This is why you sometimes see the wp_cache_incr() function used to help build a cache key name. If you have a bunch of items that all depend on one common thing, and you need all of those items invalidated/deleted at the same time, wp_cache_incr() helps: you never explicitly tell the object cache to delete a bunch of items; instead, you tell it to fetch/set against a new key. The data stored against the old key is ignored by your application and is eventually removed by the object cache, which purges old data in least-recently-used (LRU [1]) order. [2]
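The versioned-key pattern described above can be shown with a self-contained sketch; a plain array stands in for the object cache, and the key names (`xprofile_gen`, `xprofile:{gen}:{user_id}`) are made up for illustration.

```php
<?php
// Sketch of the wp_cache_incr() "versioned key" invalidation pattern.
// A plain array simulates the object cache; key names are illustrative.

$cache = [];

function cache_get(array $cache, string $key) {
    return $cache[$key] ?? false;
}

function cache_set(array &$cache, string $key, $value): void {
    $cache[$key] = $value;
}

function cache_incr(array &$cache, string $key): int {
    $cache[$key] = (int) ($cache[$key] ?? 0) + 1;
    return $cache[$key];
}

// Cache keys embed a "generation" counter.
function profile_key(array $cache, int $user_id): string {
    $gen = cache_get($cache, 'xprofile_gen');
    return 'xprofile:' . ($gen === false ? 0 : $gen) . ':' . $user_id;
}

cache_set($cache, profile_key($cache, 5), ['Name' => 'Sam']);
echo profile_key($cache, 5), "\n";                   // xprofile:0:5

// "Invalidate" every profile entry at once by bumping the counter:
// nothing is deleted, but all future reads use new keys, and the old
// entries simply age out of the cache in LRU order.
cache_incr($cache, 'xprofile_gen');
var_dump(cache_get($cache, profile_key($cache, 5))); // bool(false)
```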
I'm reliably informed that, namespacing benefits aside, all groups really do is make it easier to distribute cached data across multiple servers. For example, you could optimise your infrastructure by having "expensive but not hot" data stored in a separate (object caching) server; this will help avoid LRU purging that you might get in a pool of servers that have a lot of cache writes, in turn causing more of the expensive DB queries that you're caching against in the first place.
For memcached specifically, there is also a maximum cache bucket (aka group) size of 1MB; if the bucket size exceeds 1MB, everything in it is dropped. Hopefully we aren't going to be storing or fetching a whole megabyte from the cache, because if we are, we're probably doing something wrong :)
[0] http://scotty-t.com/2012/01/20/wordpress-memcached/
[1] http://stackoverflow.com/questions/10122384/memcache-and-expired-items
[2] Unless you set an expiry on a particular item, of course.
#12 @ 11 years ago
> namespacing benefits aside, all groups really do is make it easier to distribute cached data across multiple servers

Excellent, this is really helpful to know.
Your point about reaching the maximum cache bucket size is helpful to keep in mind. It's probably not true that BP itself will keep that much in a single bucket/group, but it's my understanding that some cache backends (like APC) can initiate a complete cache flush when the total cache size limit is reached, at least in certain configurations. See e.g. http://stackoverflow.com/questions/1053810/php-apc-what-happens-when-apc-cache-is-full and http://tollmanz.com/invalidation-schemes/, especially the section "A Cache Only Pattern". Because BP should try to be agnostic with regard to caching backends, I think it's good policy for us to manually delete cache items where possible, rather than assuming that invalidation will happen gracefully on its own.
#20 @ 11 years ago
- Resolution set to fixed
- Status changed from new to closed
I've made a fairly thorough sweep of the queries made in the xprofile query classes. I've added support for persistent caching, along with the relevant logic for invalidation, in a number of critical places. While there are still a number of places where queries are not cached, I think I've gotten most of the queries that are run most frequently, and are most likely to cause performance issues at scale.
There are a few remaining spots in the component that could benefit from some caching. In some of these cases (like the case of the 'fields' query in BP_XProfile_Group::get()) it'd take fairly significant refactoring and a complex caching/invalidation schema to be sensitive to all relevant parameters. In other cases, the queries are called so infrequently that the performance benefits would be minimal.
I think I've added enough to justify marking this ticket as resolved, at least for the milestone. If problems arise, or anyone spots more places throughout xprofile that could benefit from object caching of some sort, please open a new ticket with details.
I may bring this back to 1.2 depending on how much time there is. The solution in the patch is too involved to consider for 1.2, but it may be possible to fix this just by adding the missing "clear_cache" actions.